Bittensor is a mining network, similar to Bitcoin, that includes built-in incentives designed to encourage miners to contribute compute and knowledge. NIBittensorLLM is developed by Neural Internet and powered by Bittensor.
This LLM showcases the true potential of decentralized AI by providing the best responses from the Bittensor protocol, which consists of various AI models such as OpenAI, LLaMA2, and others.
Users can view their logs, requests, and API keys on the Validator Endpoint Frontend. However, changing the configuration is currently prohibited; otherwise, the user's queries will be blocked. If you encounter any difficulties or have any questions, please reach out to the developers on GitHub or Discord, or join the Neural Internet Discord server for the latest updates and queries.
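
A minimal sketch of calling NIBittensorLLM directly (assuming the langchain-community package is installed and the validator endpoint is reachable):

from langchain_community.llms import NIBittensorLLM

# Instantiate with defaults and send a single prompt.
llm = NIBittensorLLM()
print(llm.invoke("What is bittensor?"))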

Various parameters and response handling of NIBittensorLLM

import json
from pprint import pprint

from langchain.globals import set_debug
from langchain_community.llms import NIBittensorLLM

set_debug(True)

# The system_prompt parameter in NIBittensorLLM is optional; set it to describe what you want the model to do.
llm_sys = NIBittensorLLM(
    system_prompt="Your task is to determine a response based on the user prompt. Explain it to me as if I were the technical lead of a project."
)
sys_resp = llm_sys.invoke(
    "What is bittensor, and what are the potential benefits of decentralized AI?"
)
print(f"Response provided by the LLM with the system prompt set: {sys_resp}")

# The top_responses parameter returns multiple responses based on its value.
# The code below retrieves the top 10 miners' responses; each response is returned as JSON.

# The JSON response structure is:
""" {
    "choices":  [
                    {"index": Bittensor's Metagraph index number,
                    "uid": Unique Identifier of a miner,
                    "responder_hotkey": Hotkey of a miner,
                    "message":{"role":"assistant","content": Contains actual response},
                    "response_ms": Time in millisecond required to fetch response from a miner}
                ]
    } """

multi_response_llm = NIBittensorLLM(top_responses=10)
multi_resp = multi_response_llm.invoke("What is Neural Network Feeding Mechanism?")
json_multi_resp = json.loads(multi_resp)
pprint(json_multi_resp)
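
Since every choice includes a response_ms field, the parsed JSON can be post-processed, for example to rank miners by latency. A small sketch using the structure documented above (json_multi_resp is the dict parsed in the previous snippet):

# Sort the miners' responses by latency, fastest first.
choices = sorted(json_multi_resp["choices"], key=lambda c: c["response_ms"])
fastest = choices[0]
print(f"Fastest miner uid={fastest['uid']} answered in {fastest['response_ms']} ms")
print(fastest["message"]["content"])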

Using NIBittensorLLM with LLMChain and PromptTemplate

from langchain.chains import LLMChain
from langchain.globals import set_debug
from langchain_community.llms import NIBittensorLLM
from langchain_core.prompts import PromptTemplate

set_debug(True)

template = """Question: {question}

Answer: Let's think step by step."""


prompt = PromptTemplate.from_template(template)

# The system_prompt parameter in NIBittensorLLM is optional; set it to describe what you want the model to do.
llm = NIBittensorLLM(
    system_prompt="Your task is to determine a response based on the user prompt."
)

llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What is bittensor?"

llm_chain.run(question)
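
If you prefer LangChain's runnable composition over the legacy LLMChain, the same chain can be written with the pipe operator (a sketch; the behavior is equivalent):

# Compose the prompt template and the LLM into a runnable chain.
chain = prompt | llm
print(chain.invoke({"question": question}))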

Using NIBittensorLLM with a Conversational Agent and Google Search Tool

from langchain_community.utilities import GoogleSearchAPIWrapper
from langchain.tools import Tool

search = GoogleSearchAPIWrapper()

tool = Tool(
    name="Google Search",
    description="Search Google for recent results.",
    func=search.run,
)
from langchain import hub
from langchain.agents import (
    AgentExecutor,
    create_react_agent,
)
from langchain.memory import ConversationBufferMemory
from langchain_community.llms import NIBittensorLLM

tools = [tool]

prompt = hub.pull("hwchase17/react")


llm = NIBittensorLLM(
    system_prompt="Your task is to determine a response based on user prompt"
)

memory = ConversationBufferMemory(memory_key="chat_history")

agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, memory=memory)

response = agent_executor.invoke({"input": "What is bittensor?"})
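
AgentExecutor.invoke returns a dict whose "output" key holds the agent's final answer. A short usage sketch; note that the plain hwchase17/react prompt does not itself template chat history, so a chat-aware prompt such as hwchase17/react-chat may be needed for the attached memory to influence later turns:

print(response["output"])

# A follow-up query on the same executor; the memory records the prior exchange.
follow_up = agent_executor.invoke({"input": "How does it incentivize miners?"})
print(follow_up["output"])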
