---
title: UpstageEmbeddings
---

This notebook covers how to get started with Upstage embedding models.

## Installation

Install the `langchain-upstage` package.

```bash
pip install -U langchain-upstage
```

## Environment Setup

You need to set the following environment variable:

```python
import os

os.environ["UPSTAGE_API_KEY"] = "YOUR_API_KEY"
```

## Usage

Initialize the `UpstageEmbeddings` class.

```python
from langchain_upstage import UpstageEmbeddings

embeddings = UpstageEmbeddings(model="solar-embedding-1-large")
```
Use `embed_documents` to embed a text or a list of documents.

```python
doc_result = embeddings.embed_documents(
    ["Sung is a professor.", "This is another document"]
)
print(doc_result)
```
Use `embed_query` to embed a query string.

```python
query_result = embeddings.embed_query("What does Sung do?")
print(query_result)
```
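The returned embeddings are plain lists of floats, so you can compare a query vector against document vectors directly with cosine similarity. A minimal sketch in pure Python; the vectors below are small placeholders, not real model output (in practice you would pass the results of `embed_query` and `embed_documents`):

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Placeholder vectors standing in for embed_query / embed_documents output.
query_vec = [0.1, 0.3, 0.5]
doc_vecs = [[0.1, 0.29, 0.51], [0.9, -0.2, 0.0]]

scores = [cosine_similarity(query_vec, d) for d in doc_vecs]
best = max(range(len(scores)), key=lambda i: scores[i])
print(best)  # index of the most similar document
```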
Use `aembed_documents` and `aembed_query` for asynchronous operations.

```python
# async embed query
await embeddings.aembed_query("My query to look up")

# async embed documents
await embeddings.aembed_documents(
    ["This is a content of the document", "This is another document"]
)
```
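The `await` expressions above assume an already-running event loop (for example, a Jupyter cell). In a plain Python script, wrap the calls in `asyncio.run`; the sketch below uses a hypothetical stand-in coroutine in place of the real `aembed_query` call, and `asyncio.gather` to run several embeddings concurrently:

```python
import asyncio


async def fake_aembed_query(text: str) -> list[float]:
    """Stand-in for embeddings.aembed_query (hypothetical, no API call)."""
    await asyncio.sleep(0)
    return [float(len(text))]  # dummy one-dimensional "embedding"


async def embed_all(queries: list[str]) -> list[list[float]]:
    # asyncio.gather schedules the embedding calls concurrently.
    return await asyncio.gather(*(fake_aembed_query(q) for q in queries))


results = asyncio.run(embed_all(["My query to look up", "Another query"]))
print(results)
```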

## Using with vector store

You can use `UpstageEmbeddings` with a vector store component. The following is a simple example.

```python
from langchain_community.vectorstores import DocArrayInMemorySearch

vectorstore = DocArrayInMemorySearch.from_texts(
    ["harrison worked at kensho", "bears like to eat honey"],
    embedding=UpstageEmbeddings(model="solar-embedding-1-large"),
)
retriever = vectorstore.as_retriever()
docs = retriever.invoke("Where did Harrison work?")
print(docs)
```

---

<Tip icon="terminal" iconType="regular">
    [Connect these docs programmatically](/use-these-docs) to Claude, VSCode, and more via MCP for real-time answers.
</Tip>