You can find the list of available models here.

Installation and setup

Install the requirements
pip install -U langchain-community
Import the libraries
import requests
from langchain_community.embeddings import JinaEmbeddings
from numpy import dot
from numpy.linalg import norm
from PIL import Image

Embedding text and queries with a Jina embedding model via the JinaAI API

text_embeddings = JinaEmbeddings(
    jina_api_key="jina_*", model_name="jina-embeddings-v2-base-en"
)
text = "This is a test document."
query_result = text_embeddings.embed_query(text)
print(query_result)
doc_result = text_embeddings.embed_documents([text])
print(doc_result)
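The embeddings returned above are plain Python lists of floats, so query and document vectors can be compared directly with NumPy. A minimal sketch of such a comparison, using toy vectors in place of real API output (the helper name `cosine_similarity` is just for illustration):

```python
from numpy import dot
from numpy.linalg import norm

def cosine_similarity(a, b):
    # Dot product normalized by the two vector magnitudes.
    return dot(a, b) / (norm(a) * norm(b))

# Toy vectors standing in for embed_query / embed_documents output.
query_vec = [1.0, 0.0, 1.0]
doc_vec = [1.0, 0.0, 1.0]
print(cosine_similarity(query_vec, doc_vec))  # identical vectors give 1.0
```

In practice you would pass `query_result` and `doc_result[0]` from the snippet above instead of the toy vectors.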

Embedding images and queries with Jina CLIP via the JinaAI API

multimodal_embeddings = JinaEmbeddings(jina_api_key="jina_*", model_name="jina-clip-v1")
image = "https://avatars.githubusercontent.com/u/126733545?v=4"

description = "Logo of a parrot and a chain on green background"

im = Image.open(requests.get(image, stream=True).raw)
print("Image:")
display(im)  # `display` is available in Jupyter/IPython environments
image_result = multimodal_embeddings.embed_images([image])
print(image_result)
description_result = multimodal_embeddings.embed_documents([description])
print(description_result)
cosine_similarity = dot(image_result[0], description_result[0]) / (
    norm(image_result[0]) * norm(description_result[0])
)
print(cosine_similarity)
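Because image and text embeddings from a CLIP-style model share one vector space, the same similarity measure can rank several candidate descriptions against one image. A small sketch under that assumption, again with toy vectors in place of real `jina-clip-v1` output (the helper `rank_by_similarity` is hypothetical):

```python
import numpy as np

def rank_by_similarity(image_vec, candidate_vecs):
    # Return candidate indices sorted by cosine similarity to the image,
    # most similar first.
    image_vec = np.asarray(image_vec)
    sims = [
        float(np.dot(image_vec, np.asarray(c))
              / (np.linalg.norm(image_vec) * np.linalg.norm(c)))
        for c in candidate_vecs
    ]
    return sorted(range(len(candidate_vecs)), key=lambda i: sims[i], reverse=True)

# Toy vectors: the second candidate points in nearly the same direction
# as the image vector, so it should rank first.
image_vec = [0.9, 0.1]
candidates = [[0.1, 0.9], [0.8, 0.2]]
print(rank_by_similarity(image_vec, candidates))  # [1, 0]
```

With real output you would pass `image_result[0]` and a list of `embed_documents` results for the candidate captions.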
