Tags: Sentence Similarity · sentence-transformers · MLX · bert · mteb · Sentence Transformers · Eval Results (legacy) · text-embeddings-inference
Instructions for using mlx-community/multilingual-e5-small-mlx with libraries, inference providers, notebooks, and local apps.
- Libraries
- sentence-transformers
How to use mlx-community/multilingual-e5-small-mlx with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("mlx-community/multilingual-e5-small-mlx")
sentences = [
    "The weather is lovely today.",
    "It's so sunny outside!",
    "He drove to the stadium.",
]
embeddings = model.encode(sentences)
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [3, 3]
```
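The `model.similarity(embeddings, embeddings)` call returns pairwise cosine similarity between the embeddings. As a self-contained sketch of what that computes (toy vectors with NumPy, not real model outputs):

```python
import numpy as np

def cosine_similarity_matrix(embeddings):
    # Normalize each row to unit length, then take pairwise dot products.
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    normalized = embeddings / norms
    return normalized @ normalized.T

emb = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
sims = cosine_similarity_matrix(emb)
print(sims.shape)  # (3, 3), matching the [3, 3] shape printed above
```

The diagonal is always 1.0 (each vector compared with itself), and off-diagonal entries range over [-1, 1].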
- MLX
How to use mlx-community/multilingual-e5-small-mlx with MLX:
```shell
# Download the model from the Hub
pip install "huggingface_hub[hf_xet]"
huggingface-cli download --local-dir multilingual-e5-small-mlx mlx-community/multilingual-e5-small-mlx
```
- Notebooks
  - Google Colab
  - Kaggle
- Local Apps
  - LM Studio
multilingual-e5-small-mlx
This model was converted to MLX format from intfloat/multilingual-e5-small.
Refer to the original model card for more details on the model.
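One usage detail from the original intfloat/multilingual-e5-small card worth repeating: E5 models expect every input text to start with `query: ` or `passage: `. A minimal sketch of preparing inputs that way (the helper name is my own, not part of any library):

```python
def add_e5_prefix(texts, kind):
    # E5-family models are trained with "query: " / "passage: " input prefixes;
    # omitting them degrades retrieval quality.
    if kind not in ("query", "passage"):
        raise ValueError("kind must be 'query' or 'passage'")
    return [f"{kind}: {t}" for t in texts]

queries = add_e5_prefix(["The weather is lovely today."], "query")
print(queries[0])  # query: The weather is lovely today.
```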
Use with mlx
```shell
pip install mlx
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples/llms/hf_llm
python generate.py --model mlx-community/multilingual-e5-small-mlx --prompt "My name is"
```
Downloads last month: 2,964
Spaces using mlx-community/multilingual-e5-small-mlx (11)
- 🥇 mteb/leaderboard_legacy
- 🥇 SmileXing/leaderboard
- 🥇 sq66/leaderboard_legacy
- 🚀 reader-1/1
- 🥇 shiwan7788/leaderboard-uni
Evaluation results (MTEB, test set, self-reported)

| Dataset | Metric | Value |
|---|---|---|
| AmazonCounterfactualClassification (en) | accuracy | 73.791 |
| AmazonCounterfactualClassification (en) | ap | 37.000 |
| AmazonCounterfactualClassification (en) | f1 | 67.955 |
| AmazonCounterfactualClassification (de) | accuracy | 71.649 |
| AmazonCounterfactualClassification (de) | ap | 82.119 |
| AmazonCounterfactualClassification (de) | f1 | 69.880 |
| AmazonCounterfactualClassification (en-ext) | accuracy | 75.810 |
| AmazonCounterfactualClassification (en-ext) | ap | 24.469 |
| AmazonCounterfactualClassification (en-ext) | f1 | 63.001 |
| AmazonCounterfactualClassification (ja) | accuracy | 64.186 |
| AmazonCounterfactualClassification (ja) | ap | 15.497 |
| AmazonCounterfactualClassification (ja) | f1 | 52.072 |
| AmazonPolarityClassification | accuracy | 88.699 |
| AmazonPolarityClassification | ap | 85.270 |
| AmazonPolarityClassification | f1 | 88.656 |
| AmazonReviewsClassification (en) | accuracy | 44.698 |
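The accuracy, ap, and f1 metrics above are standard classification scores as computed by MTEB. For reference, a minimal pure-Python sketch of binary accuracy and F1 (toy labels, not MTEB data):

```python
def accuracy(y_true, y_pred):
    # Fraction of predictions that match the labels.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_binary(y_true, y_pred):
    # Harmonic mean of precision and recall for the positive class.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.75
print(f1_binary([1, 0, 1, 1], [1, 0, 0, 1]))  # ~0.8
```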