---
title: LocalAI
---

<Info>
**`langchain-localai` is a third-party integration package for LocalAI. It provides a simple way to use LocalAI services in LangChain.**

The source code is available on [GitHub](https://github.com/mkhludnev/langchain-localai).

</Info>

Let's load the LocalAI Embedding class. To use it, a LocalAI service must be hosted somewhere with an embedding model configured. See the documentation at [localai.io/basics/getting_started/index.html](https://localai.io/basics/getting_started/index.html) and [localai.io/features/embeddings/index.html](https://localai.io/features/embeddings/index.html).

First, install the integration package:

```bash
pip install -U langchain-localai
```

```python
from langchain_localai import LocalAIEmbeddings

embeddings = LocalAIEmbeddings(
    openai_api_base="http://localhost:8080", model="embedding-model-name"
)
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
```
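
`embed_query` returns a single vector (a list of floats) and `embed_documents` returns one vector per input text. As a quick sanity check (a sketch assuming `numpy` is available), you can inspect the dimensionality and compare the two results:

```python
import numpy as np

def cosine_similarity(a, b) -> float:
    """Cosine similarity between two embedding vectors."""
    a, b = np.asarray(a), np.asarray(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(len(query_result))  # embedding dimensionality depends on the configured model
print(cosine_similarity(query_result, doc_result[0]))  # ~1.0 for identical text
```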

Let's load the LocalAI Embedding class with first-generation models (for example, text-search-ada-doc-001/text-search-ada-query-001). Note: these models are not recommended.

```python
from langchain_community.embeddings import LocalAIEmbeddings

embeddings = LocalAIEmbeddings(
    openai_api_base="http://localhost:8080", model="embedding-model-name"
)
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
```
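
Once configured, the `embeddings` object can be passed to any LangChain component that accepts an `Embeddings` instance. As a minimal sketch (assuming `langchain-core` is installed, which provides `InMemoryVectorStore`):

```python
from langchain_core.vectorstores import InMemoryVectorStore

store = InMemoryVectorStore.from_texts(
    ["This is a test document.", "LocalAI serves OpenAI-compatible endpoints."],
    embedding=embeddings,
)

# retrieve the stored text most similar to the query
results = store.similarity_search("test document", k=1)
print(results[0].page_content)
```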
If you are behind an explicit proxy, you can set the `OPENAI_PROXY` environment variable to route requests through it:

```python
import os

# pass outbound requests through an explicit proxy
os.environ["OPENAI_PROXY"] = "http://proxy.yourcompany.com:8080"
```
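
`LocalAIEmbeddings` also inherits the standard async methods of the `Embeddings` interface, so the calls above have awaitable counterparts. A minimal sketch reusing the `embeddings` object configured earlier:

```python
import asyncio

async def main() -> None:
    # async counterparts of embed_query / embed_documents
    query_vec = await embeddings.aembed_query("This is a test document.")
    doc_vecs = await embeddings.aembed_documents(["This is a test document."])
    print(len(query_vec), len(doc_vecs))

asyncio.run(main())
```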
