```python
question = "When was George Washington president?"

llm_chain.invoke({"question": question})
```
With Ray, we can distribute the queries without needing an async implementation. This applies not only to the Anyscale LLM model, but to any other LangChain LLM model that does not implement `_acall` or `_agenerate`.
```python
prompt_list = [
    "When was George Washington president?",
    "Explain to me the difference between nuclear fission and fusion.",
    "Give me a list of 5 science fiction books I should read next.",
    "Explain the difference between Spark and Ray.",
    "Suggest some fun holiday ideas.",
    "Tell a joke.",
    "What is 2+2?",
    "Explain what is machine learning like I am five years old.",
    "Explain what is artificial intelligence.",
]
```
```python
import ray

# A fractional CPU request lets many of these I/O-bound tasks share a core.
@ray.remote(num_cpus=0.1)
def send_query(llm, prompt):
    resp = llm.invoke(prompt)
    return resp

# Launch one remote task per prompt, then block until all responses arrive.
futures = [send_query.remote(llm, prompt) for prompt in prompt_list]
results = ray.get(futures)
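```

Since `ray.get` returns results in the same order as the futures passed to it, each response lines up with its originating prompt. As a minimal sketch of consuming the output, assuming each entry of `results` is the response string returned by `llm.invoke`:

```python
# Hypothetical usage: pair each prompt with its response for inspection.
for prompt, answer in zip(prompt_list, results):
    print(f"Q: {prompt}\nA: {answer}\n")
```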