---
title: Friendli
---
> [Friendli](https://friendli.ai/) enhances AI application performance and optimizes cost savings with scalable, efficient deployment options, tailored for high-demand AI workloads.

This tutorial guides you through integrating `Friendli` with LangChain.
## Setup
Ensure the `langchain_community` and `friendli-client` packages are installed.
```sh
pip install -U langchain-community friendli-client
```

Create a Friendli Personal Access Token and set it as the `FRIENDLI_TOKEN` environment variable.
```python
import getpass
import os

if "FRIENDLI_TOKEN" not in os.environ:
    os.environ["FRIENDLI_TOKEN"] = getpass.getpass("Friendli Personal Access Token: ")
```
Initialize the `Friendli` LLM with the model you want to use; the model used in this example is `meta-llama-3.1-8b-instruct`. You can check the available models at [friendli.ai/docs](https://friendli.ai/docs).
```python
from langchain_community.llms.friendli import Friendli

llm = Friendli(model="meta-llama-3.1-8b-instruct", max_tokens=100, temperature=0)
```
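If you prefer not to rely on the environment variable, the token can also be passed at construction time. A minimal sketch, assuming the `friendli_token` parameter of `langchain_community`'s `Friendli` class (verify the parameter name against your installed version):

```python
from langchain_community.llms.friendli import Friendli

llm = Friendli(
    model="meta-llama-3.1-8b-instruct",
    friendli_token="YOUR_PERSONAL_ACCESS_TOKEN",  # placeholder; assumed parameter name
    max_tokens=100,
    temperature=0,
)
```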
## Usage
`Friendli` supports all methods of `LLM`, including the async APIs.

You can use the functionality of `invoke`, `batch`, `generate`, and `stream`.
```python
llm.invoke("Tell me a joke.")
```
```
" I need a laugh.\nHere's one: Why couldn't the bicycle stand up by itself?\nBecause it was two-tired!\nI hope that made you laugh! Do you want to hear another one? I have a million of 'em! (Okay, maybe not a million, but I have a few more where that came from!) What kind of joke are you in the mood for? A pun, a play on words, or something else? Let me know and I'll try to come"
```
```python
llm.batch(["Tell me a joke.", "Tell me a joke."])
```
```
[" I need a laugh.\nHere's one: Why couldn't the bicycle stand up by itself?\nBecause it was two-tired!\nI hope that made you laugh! Do you want to hear another one? I have a million of 'em! (Okay, maybe not a million, but I have a few more where that came from!) What kind of joke are you in the mood for? A pun, a play on words, or something else? Let me know and I'll try to come",
 " I need a laugh.\nHere's one: Why couldn't the bicycle stand up by itself?\nBecause it was two-tired!\nI hope that made you laugh! Do you want to hear another one? I have a million of 'em! (Okay, maybe not a million, but I have a few more where that came from!) What kind of joke are you in the mood for? A pun, a play on words, or something else? Let me know and I'll try to come"]
```
```python
llm.generate(["Tell me a joke.", "Tell me a joke."])
```
```
LLMResult(generations=[[Generation(text=" I need a laugh.\nHere's one: Why couldn't the bicycle stand up by itself?\nBecause it was two-tired!\nI hope that made you laugh! Do you want to hear another one? I have a million of 'em! (Okay, maybe not a million, but I have a few more where that came from!) What kind of joke are you in the mood for? A pun, a play on words, or something else? Let me know and I'll try to come")], [Generation(text=" I need a laugh.\nHere's one: Why couldn't the bicycle stand up by itself?\nBecause it was two-tired!\nI hope that made you laugh! Do you want to hear another one? I have a million of 'em! (Okay, maybe not a million, but I have a few more where that came from!) What kind of joke are you in the mood for? A pun, a play on words, or something else? Let me know and I'll try to come")]], llm_output={'model': 'meta-llama-3.1-8b-instruct'}, run=[RunInfo(run_id=UUID('ee97984b-6eab-4d40-a56f-51d6114953de')), RunInfo(run_id=UUID('cbe501ea-a20f-4420-9301-86cdfcf898c0'))], type='LLMResult')
```
```python
for chunk in llm.stream("Tell me a joke."):
    print(chunk, end="", flush=True)
```
You can also use all the functionality of the async APIs: `ainvoke`, `abatch`, `agenerate`, and `astream`.
```python
await llm.ainvoke("Tell me a joke.")
```
```
" I need a laugh.\nHere's one: Why couldn't the bicycle stand up by itself?\nBecause it was two-tired!\nI hope that made you laugh! Do you want to hear another one? I have a million of 'em! (Okay, maybe not a million, but I have a few more where that came from!) What kind of joke are you in the mood for? A pun, a play on words, or something else? Let me know and I'll try to come"
```
```python
await llm.abatch(["Tell me a joke.", "Tell me a joke."])
```
```
[" I need a laugh.\nHere's one: Why couldn't the bicycle stand up by itself?\nBecause it was two-tired!\nI hope that made you laugh! Do you want to hear another one? I have a million of 'em! (Okay, maybe not a million, but I have a few more where that came from!) What kind of joke are you in the mood for? A pun, a play on words, or something else? Let me know and I'll try to come",
 " I need a laugh.\nHere's one: Why couldn't the bicycle stand up by itself?\nBecause it was two-tired!\nI hope that made you laugh! Do you want to hear another one? I have a million of 'em! (Okay, maybe not a million, but I have a few more where that came from!) What kind of joke are you in the mood for? A pun, a play on words, or something else? Let me know and I'll try to come"]
```
```python
await llm.agenerate(["Tell me a joke.", "Tell me a joke."])
```
```
LLMResult(generations=[[Generation(text=" I need a laugh.\nHere's one: Why couldn't the bicycle stand up by itself?\nBecause it was two-tired!\nI hope that made you laugh! Do you want to hear another one? I have a million of 'em! (Okay, maybe not a million, but I have a few more where that came from!) What kind of joke are you in the mood for? A pun, a play on words, or something else? Let me know and I'll try to come")], [Generation(text=" I need a laugh.\nHere's one: Why couldn't the bicycle stand up by itself?\nBecause it was two-tired!\nI hope that made you laugh! Do you want to hear another one? I have a million of 'em! (Okay, maybe not a million, but I have a few more where that came from!) What kind of joke are you in the mood for? A pun, a play on words, or something else? Let me know and I'll try to come")]], llm_output={'model': 'meta-llama-3.1-8b-instruct'}, run=[RunInfo(run_id=UUID('857bd88e-e68a-46d2-8ad3-4a282c199a89')), RunInfo(run_id=UUID('a6ba6e7f-9a7a-4aa1-a2ac-c8fcf48309d3'))], type='LLMResult')
```
```python
async for chunk in llm.astream("Tell me a joke."):
    print(chunk, end="", flush=True)
```
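Because `Friendli` is a standard LangChain `LLM`, it also composes with the rest of the framework. As a minimal sketch of the integration (the prompt wording and `topic` variable are illustrative), you can pipe a `PromptTemplate` into the model with LCEL:

```python
from langchain_core.prompts import PromptTemplate

from langchain_community.llms.friendli import Friendli

llm = Friendli(model="meta-llama-3.1-8b-instruct", max_tokens=100, temperature=0)

# Compose prompt -> model with LCEL's pipe operator; the resulting chain
# exposes the same invoke/batch/stream/async surface shown above.
prompt = PromptTemplate.from_template("Tell me a joke about {topic}.")
chain = prompt | llm

print(chain.invoke({"topic": "bicycles"}))
```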