---
title: Llama.cpp
---
>The [llama.cpp python](https://github.com/abetlen/llama-cpp-python) library is a simple Python binding for `@ggerganov`'s
>[llama.cpp](https://github.com/ggerganov/llama.cpp).
>
>This package provides:
>
> - Low-level access to the C API via the ctypes interface
> - A high-level Python API for text completion
> - An `OpenAI`-like API
> - `LangChain` compatibility
> - `LlamaIndex` compatibility
> - An OpenAI-compatible web server
> - Local Copilot replacement
> - Function Calling support
> - Vision API support
> - Multiple Models
```bash
pip install -qU llama-cpp-python
```

```python
from langchain_community.embeddings import LlamaCppEmbeddings

llama = LlamaCppEmbeddings(model_path="/path/to/model/ggml-model-q4_0.bin")

text = "This is a test document."

query_result = llama.embed_query(text)
doc_result = llama.embed_documents([text])
```
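Both calls return plain Python lists of floats: `embed_query` yields a single vector, while `embed_documents` yields one vector per input string. The sketch below is a minimal check of that shape; it reuses a placeholder model path and passes the `n_ctx` and `n_batch` loading parameters that `LlamaCppEmbeddings` forwards to `llama.cpp` (values here are illustrative assumptions, adjust them for your model).

```python
from langchain_community.embeddings import LlamaCppEmbeddings

# Placeholder path; point this at your own quantized model file.
llama = LlamaCppEmbeddings(
    model_path="/path/to/model/ggml-model-q4_0.bin",
    n_ctx=512,  # context window used when loading the model (assumed value)
    n_batch=8,  # batch size for prompt processing (assumed value)
)

query_vec = llama.embed_query("What is llama.cpp?")
doc_vecs = llama.embed_documents(["First document.", "Second document."])

# The embedding dimensionality is fixed by the model itself.
print(len(query_vec))                  # e.g. 4096, depending on the model
print(len(doc_vecs), len(doc_vecs[0]))  # one vector per input document
```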