이 가이드는 double-texting이 무엇인지에 대한 지식을 전제로 하며, double-texting 개념 가이드에서 이에 대해 배울 수 있습니다. 이 가이드는 double texting에 대한 reject 옵션을 다루며, 이는 에러를 발생시켜 graph의 새로운 run을 거부하고 원래 run이 완료될 때까지 계속 진행합니다. 아래는 reject 옵션을 사용하는 간단한 예제입니다.

Setup

먼저, JS와 CURL model 출력을 출력하기 위한 간단한 helper function을 정의하겠습니다 (Python을 사용하는 경우 이 부분을 건너뛸 수 있습니다):
  • Javascript
  • CURL
function prettyPrint(m) {
  const padded = " " + m['type'] + " ";
  const sepLen = Math.floor((80 - padded.length) / 2);
  const sep = "=".repeat(sepLen);
  const secondSep = sep + (padded.length % 2 ? "=" : "");

  console.log(`${sep}${padded}${secondSep}`);
  console.log("\n\n");
  console.log(m.content);
}
이제 필요한 package들을 import하고 client, assistant, thread를 인스턴스화하겠습니다.
  • Python
  • Javascript
  • CURL
import httpx
from langchain_core.messages import convert_to_messages
from langgraph_sdk import get_client

client = get_client(url=<DEPLOYMENT_URL>)
# Using the graph deployed with the name "agent"
assistant_id = "agent"
thread = await client.threads.create()

Create runs

이제 thread를 실행하고 “reject” 옵션으로 두 번째 run을 시도할 수 있습니다. 이미 run을 시작했기 때문에 실패해야 합니다:
  • Python
  • Javascript
  • CURL
run = await client.runs.create(
    thread["thread_id"],
    assistant_id,
    input={"messages": [{"role": "user", "content": "what's the weather in sf?"}]},
)
try:
    await client.runs.create(
        thread["thread_id"],
        assistant_id,
        input={
            "messages": [{"role": "user", "content": "what's the weather in nyc?"}]
        },
        multitask_strategy="reject",
    )
except httpx.HTTPStatusError as e:
    print("Failed to start concurrent run", e)
출력:
Failed to start concurrent run Client error '409 Conflict' for url 'http://localhost:8123/threads/f9e7088b-8028-4e5c-88d2-9cc9a2870e50/runs'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/409

View run results

원래 thread가 실행을 완료했는지 확인할 수 있습니다:
  • Python
  • Javascript
  • CURL
# wait until the original run completes
await client.runs.join(thread["thread_id"], run["run_id"])

state = await client.threads.get_state(thread["thread_id"])

for m in convert_to_messages(state["values"]["messages"]):
    m.pretty_print()
출력:
================================ Human Message =================================

what's the weather in sf?
================================== Ai Message ==================================

[{'id': 'toolu_01CyewEifV2Kmi7EFKHbMDr1', 'input': {'query': 'weather in san francisco'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}]
Tool Calls:
tavily_search_results_json (toolu_01CyewEifV2Kmi7EFKHbMDr1)
Call ID: toolu_01CyewEifV2Kmi7EFKHbMDr1
Args:
query: weather in san francisco
================================= Tool Message =================================
Name: tavily_search_results_json

[{"url": "https://www.accuweather.com/en/us/san-francisco/94103/june-weather/347629", "content": "Get the monthly weather forecast for San Francisco, CA, including daily high/low, historical averages, to help you plan ahead."}]
================================== Ai Message ==================================

According to the search results from Tavily, the current weather in San Francisco is:

The average high temperature in San Francisco in June is around 65°F (18°C), with average lows around 54°F (12°C). June tends to be one of the cooler and foggier months in San Francisco due to the marine layer of fog that often blankets the city during the summer months.

Some key points about the typical June weather in San Francisco:

* Mild temperatures with highs in the 60s F and lows in the 50s F
* Foggy mornings that often burn off to sunny afternoons
* Little to no rainfall, as June falls in the dry season
* Breezy conditions, with winds off the Pacific Ocean
* Layers are recommended for changing weather conditions

So in summary, you can expect mild, foggy mornings giving way to sunny but cool afternoons in San Francisco this time of year. The marine layer keeps temperatures moderate compared to other parts of California in June.

Connect these docs programmatically to Claude, VSCode, and more via MCP for real-time answers.
I