Disclaimer: `LangChain decorators` is not created by the LangChain team and is not officially supported by it.
LangChain decorators is a layer on top of LangChain that provides syntactic sugar 🍭 for writing custom langchain prompts and chains. For feedback, issues, and contributions, please raise an issue here: ju-bezdek/langchain-decorators
Main principles and benefits:
- a more pythonic way of writing code
- write multiline prompts that won't break your code flow with indentation
- make use of IDE built-in support for hinting, type checking, and doc popups to quickly peek into a function and see its prompt, the parameters it consumes, etc.
- leverage all the power of the 🦜🔗 LangChain ecosystem
- support for optional parameters
- easily share parameters between prompts by binding them to one class
Here is a simple example of code written with **LangChain Decorators ✨**:
```python
from langchain_decorators import llm_prompt

@llm_prompt
def write_me_short_post(topic:str, platform:str="twitter", audience:str = "developers")->str:
    """
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    """
    return

# run it naturally
write_me_short_post(topic="starwars")
# or
write_me_short_post(topic="starwars", platform="reddit")
```

# Quick start

## Installation

```bash
pip install langchain_decorators
```

## Examples

A good way to start is to review the examples below.

# Defining other parameters

Here we are just marking a function as a prompt with the `llm_prompt` decorator, effectively turning it into an LLMChain. A standard LLMChain takes many more init parameters than just input_variables and a prompt... here that implementation detail is hidden inside the decorator. Here is how it works:

1. Using global settings:
```python
# define global settings for all prompts (if not set - chatGPT is the current default)
from langchain_decorators import GlobalSettings
from langchain_openai import ChatOpenAI

GlobalSettings.define_settings(
    default_llm=ChatOpenAI(temperature=0.0),  # this is the default... you can change it here globally
    default_streaming_llm=ChatOpenAI(temperature=0.0, streaming=True),  # this is the default... will be used for streaming
)
```
2. Using predefined prompt types:
```python
# You can change the default prompt types
from langchain_decorators import PromptTypes, PromptTypeSettings, llm_prompt
from langchain_openai import ChatOpenAI

PromptTypes.AGENT_REASONING.llm = ChatOpenAI()

# Or you can just define your own ones:
class MyCustomPromptTypes(PromptTypes):
    GPT4 = PromptTypeSettings(llm=ChatOpenAI(model="gpt-4"))

@llm_prompt(prompt_type=MyCustomPromptTypes.GPT4)
def write_a_complicated_code(app_idea:str)->str:
    ...
```

3. Defining the settings directly in the decorator:
```python
from langchain_decorators import llm_prompt
from langchain_openai import OpenAI

@llm_prompt(
    llm=OpenAI(temperature=0.7),
    stop_tokens=["\nObservation"],
    # ... (other parameters elided)
)
def creative_writer(book_title:str)->str:
    ...
```

# Passing a memory and/or callbacks:

To pass any of these, just declare them in the function (or use kwargs to pass anything):

```python
from langchain.memory import SimpleMemory
from langchain_decorators import llm_prompt

@llm_prompt()
async def write_me_short_post(topic:str, platform:str="twitter", audience:str="developers", memory:SimpleMemory = None):
    """
    {history_key}
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    """
    pass

await write_me_short_post(topic="old movies")
```
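For illustration, here is a hypothetical usage sketch (not from the original docs): it assumes the memory's variables are unrolled into the template, so a `SimpleMemory` whose `memories` dict contains a `history_key` entry would fill the `{history_key}` placeholder above.

```python
from langchain.memory import SimpleMemory

# hypothetical sketch: SimpleMemory exposes its `memories` keys as variables,
# so the "history_key" entry is assumed to fill the {history_key} placeholder above
memory = SimpleMemory(memories={"history_key": "Our previous posts were about sci-fi classics."})
await write_me_short_post(topic="old movies", memory=memory)
```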

# Simplified streaming

If we want to take advantage of streaming:
- we need to define the prompt as an async function
- turn on streaming in the decorator, or define a PromptType with streaming on
- capture the stream using StreamingContext
This way we just mark which prompt should be streamed, without having to worry about which LLM to use, or about creating a streaming handler and distributing it into a particular part of our chain... just turn streaming on/off on the prompt/prompt type... Streaming will happen only if we call it in a streaming context... there we can define a simple function to handle the stream.
```python
# this code example is complete and should run as it is

from langchain_decorators import StreamingContext, llm_prompt

# this will mark the prompt for streaming (useful if we want to stream just some prompts in our app... but don't want to distribute the callback handlers)
# note that only async functions can be streamed (you'll get an error if it's not)
@llm_prompt(capture_stream=True)
async def write_me_short_post(topic:str, platform:str="twitter", audience:str = "developers"):
    """
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    """
    pass


# just an arbitrary function to demonstrate the streaming... will be some websockets code in the real world
tokens=[]
def capture_stream_func(new_token:str):
    tokens.append(new_token)

# if we want to capture the stream, we need to wrap the execution into StreamingContext...
# this will allow us to capture the stream even if the prompt call is hidden inside a higher level method
# only the prompts marked with capture_stream will be captured here
with StreamingContext(stream_to_stdout=True, callback=capture_stream_func):
    result = await write_me_short_post(topic="old movies")
    print("Stream finished ... we can distinguish tokens thanks to alternating colors")


print("\nWe've captured",len(tokens),"tokens🎉\n")
print("Here is the result:")
print(result)
```

# Prompt declarations

By default, the prompt is the entire function docstring, unless you mark the prompt explicitly.
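For example, in this minimal sketch (a hypothetical `summarize` prompt, consistent with the first example above) the whole docstring is used verbatim as the template:

```python
from langchain_decorators import llm_prompt

# the whole docstring is the prompt template - no markup needed
@llm_prompt
def summarize(text:str)->str:
    """
    Summarize the following text in one sentence:
    {text}
    """
```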

## Documenting your prompt

We can specify which part of the docstring is the prompt definition by marking a code block with the `<prompt>` language tag:
````python
@llm_prompt
def write_me_short_post(topic:str, platform:str="twitter", audience:str = "developers"):
    """
    Here is a good way to write a prompt as part of a function docstring, with additional documentation for devs.

    It needs to be a code block, marked as a `<prompt>` language
    ```<prompt>
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    ```

    Now only the code block above will be used as the prompt, and the rest of the docstring will be used as a description for developers.
    (It also has the nice benefit that the IDE (like VS Code) will display the prompt properly, not trying to parse it as markdown and thus not showing newlines properly.)
    """
    return
````

## Chat messages prompt

For chat models, it is very useful to define the prompt as a set of message templates... here is how to do it:

````python
@llm_prompt
def simulate_conversation(human_input:str, agent_role:str="a pirate"):
    """
    ## System message
     - note the `:system` suffix inside the <prompt:_role_> tag


    ```<prompt:system>
    You are a {agent_role} hacker. You must act like one.
    You reply always in code, using python or javascript code block...
    for example:

    ... do not reply with anything else.. just with code - respecting your role.
    ```

    # human message
    (we are using the real roles that are enforced by the LLM - GPT supports system, assistant, user)
    ```<prompt:user>
    Hello, who are you
    ```

    a reply:
    ```<prompt:assistant>
    \``` python <<- escaping the inner code block with \ that should be part of the prompt
    def hello():
        print("Argh... hello you pesky pirate")
    \```
    ```

    we can also add some history using a placeholder
    ```<prompt:placeholder>
    {history}
    ```
    ```<prompt:user>
    {human_input}
    ```

    Now only the code blocks above will be used as the prompt, and the rest of the docstring will be used as a description for developers.
    """
    pass
````

The roles here are model-native roles (assistant, user, system for chatGPT).
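Conceptually (this mapping is an illustration, not the library's internal code), the role-tagged blocks above resolve to a list of native chat messages, roughly like LangChain's message types:

```python
# illustrative only: roughly what the role-tagged sections above resolve to
from langchain_core.messages import SystemMessage, HumanMessage, AIMessage

messages = [
    SystemMessage(content="You are a pirate hacker. You must act like one. ..."),
    HumanMessage(content="Hello, who are you"),
    AIMessage(content='``` python\ndef hello():\n    print("Argh... hello you pesky pirate")\n```'),
    # ...the {history} placeholder expands to the past messages here...
    HumanMessage(content="<the rendered {human_input} value>"),
]
```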



# Optional sections
- you can define a whole section of your prompt that should be optional
- if any input in the section is missing, the whole section won't be rendered

The syntax for this is as follows:

``` python
@llm_prompt
def prompt_with_optional_partials():
    """
    this text will be rendered always, but

    {? anything inside this block will be rendered only if all the {value}s parameters are not empty (None | "")   ?}

    you can also place it in between the words
    this too will be rendered{? , but
        this block will be rendered only if {this_value} and {this_value}
        is not empty?} !
    """
```

# Output parsers

- the llm_prompt decorator tries to automatically detect the best output parser based on the output type (if not set, it returns the raw string)
- list, dict and pydantic outputs are also supported natively (automatically)
```python
# this code example is complete and should run as it is

from langchain_decorators import llm_prompt

@llm_prompt
def write_name_suggestions(company_business:str, count:int)->list:
    """ Write me {count} good name suggestions for company that {company_business}
    """
    pass

write_name_suggestions(company_business="sells cookies", count=5)
```

## More complex structures

For dict / pydantic you need to specify the formatting instructions... this can be tedious, so you can let the output parser generate the instructions for you based on the model (pydantic):
```python
from langchain_decorators import llm_prompt
from pydantic import BaseModel, Field


class TheOutputStructureWeExpect(BaseModel):
    name:str = Field(description="The name of the company")
    headline:str = Field(description="The description of the company (for landing page)")
    employees:list[str] = Field(description="5-8 fake employee names with their positions")

@llm_prompt()
def fake_company_generator(company_business:str)->TheOutputStructureWeExpect:
    """ Generate a fake company that {company_business}
    {FORMAT_INSTRUCTIONS}
    """
    return

company = fake_company_generator(company_business="sells cookies")

# print the result nicely formatted
print("Company name: ",company.name)
print("company headline: ",company.headline)
print("company employees: ",company.employees)
```

# Binding the prompt to an object

````python
from pydantic import BaseModel
from langchain_decorators import llm_prompt

class AssistantPersonality(BaseModel):
    assistant_name:str
    assistant_role:str
    field:str

    @property
    def a_property(self):
        return "whatever"

    def hello_world(self, function_kwarg:str=None):
        """
        We can reference any {field} or {a_property} inside our prompt... and combine it with {function_kwarg} in the method
        """

    @llm_prompt
    def introduce_your_self(self)->str:
        """
        ```<prompt:system>
        You are an assistant named {assistant_name}.
        Your role is to act as {assistant_role}
        ```
        ```<prompt:user>
        Introduce your self (in less than 20 words)
        ```
        """


personality = AssistantPersonality(assistant_name="John", assistant_role="a pirate")
print(personality.introduce_your_self())
````


# More examples:

- these examples, plus a couple more, are also available in the [colab notebook here](https://colab.research.google.com/drive/1no-8WfeP6JaLD9yUtkPgym6x0G9ZYZOG#scrollTo=N4cf__D0E2Yk)
- including a [ReAct Agent re-implementation](https://colab.research.google.com/drive/1no-8WfeP6JaLD9yUtkPgym6x0G9ZYZOG#scrollTo=3bID5fryE2Yp) using purely langchain decorators
