Tags: Sentence Similarity · sentence-transformers · PyTorch · ONNX · Safetensors · OpenVINO · English · bert · mteb · Sentence Transformers · Eval Results (legacy) · text-embeddings-inference
Instructions for using intfloat/e5-large-v2 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
How to use intfloat/e5-large-v2 with sentence-transformers:
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("intfloat/e5-large-v2")
sentences = [
    "That is a happy person",
    "That is a happy dog",
    "That is a very happy person",
    "Today is a sunny day",
]
embeddings = model.encode(sentences)
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [4, 4]
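For this model, the similarity call above computes a pairwise cosine-similarity matrix over the embeddings. As a minimal NumPy sketch of the same computation (on toy vectors standing in for real model output, not actual e5-large-v2 embeddings):

```python
import numpy as np

def cosine_similarity_matrix(embeddings: np.ndarray) -> np.ndarray:
    # L2-normalize each row; the dot product of normalized rows
    # is then the cosine similarity between the original vectors.
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    normalized = embeddings / norms
    return normalized @ normalized.T

# Toy 4 x 3 "embeddings" in place of model output
emb = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [1.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])
sims = cosine_similarity_matrix(emb)
print(sims.shape)  # (4, 4)
```

Each diagonal entry is 1 (every vector is maximally similar to itself), and off-diagonal entries range over [-1, 1].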
Single input vs Multiple inputs (#13), opened by innovationTony
Providing a sentence for embedding as a single input versus in a list of strings alongside other sentences returns different results. Is this expected behaviour?
I have set normalize_embeddings=True.
@innovationTony No, this is unexpected. The results should be the same, up to small floating-point precision differences.
I tested the following code but could not reproduce the issue you reported. Can you provide a short example?
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer('intfloat/e5-large-v2')
input_texts = [
    'query: how much protein should a female eat',
    'query: summit define',
    "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.",
]

# Encode the full batch, then the first text alone, and compare
embeddings1 = model.encode(input_texts, normalize_embeddings=True)
print(embeddings1[0])
embeddings2 = model.encode(input_texts[:1], normalize_embeddings=True)
print(embeddings2[0])

assert np.allclose(embeddings1[0], embeddings2[0], rtol=1e-4, atol=1e-4)
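As a side note on the tolerance in the np.allclose check above: np.allclose(a, b, rtol, atol) verifies |a - b| <= atol + rtol * |b| elementwise, so rtol=atol=1e-4 tolerates only tiny float-precision deviations such as those introduced by batching or padding. A self-contained illustration (toy vectors, not model output):

```python
import numpy as np

a = np.array([0.1234567, -0.7654321])
b = a + 5e-5  # within the 1e-4 tolerance
c = a + 5e-3  # well outside it

print(np.allclose(a, b, rtol=1e-4, atol=1e-4))  # True
print(np.allclose(a, c, rtol=1e-4, atol=1e-4))  # False
```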