Tags: Text Classification · Transformers · PyTorch · TensorFlow · JAX · Safetensors · English · roberta · autogenerated-modelcard · text-embeddings-inference
Instructions to use FacebookAI/roberta-large-mnli with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use FacebookAI/roberta-large-mnli with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="FacebookAI/roberta-large-mnli")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("FacebookAI/roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("FacebookAI/roberta-large-mnli")
```

- Inference
- Notebooks
  - Google Colab
  - Kaggle
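Since roberta-large-mnli is a three-way NLI classifier, the "load model directly" path above is typically used to score a premise/hypothesis pair encoded as a sentence pair. A minimal sketch (the example sentences are illustrative, not from the model card):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "FacebookAI/roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Illustrative premise/hypothesis pair (not from the model card)
premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# NLI models expect the premise and hypothesis encoded as one sentence pair
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# model.config.id2label maps class indices to the label names
# (CONTRADICTION / NEUTRAL / ENTAILMENT for this checkpoint)
probs = logits.softmax(dim=-1)[0]
scores = {model.config.id2label[i]: probs[i].item() for i in range(probs.numel())}
print(scores)
```

The softmax scores sum to one across the three labels; the highest-scoring label is the model's predicted relation for the pair.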
Community discussions:

- Request: DOI (1 reply) · #10 opened about 2 years ago by proasr
- How to finetune this model on RTE, MRPC and SST datasets in GLUE benchmark? (1 reply) · #9 opened about 2 years ago by zhai1010
- Some weights of the model checkpoint at roberta-large-mnli were not used (3 replies) · #7 opened over 2 years ago by tomhosking
- Add evaluation results on the default config and validation_matched split of multi_nli · #6 opened over 2 years ago by autoevaluator
- Add evaluation results on the default config and validation_matched split of multi_nli · #5 opened over 2 years ago by autoevaluator
- Add missing files · #4 opened about 3 years ago by Xenova
- id2label and label2id are incompatible with multi_nli dataset (4 replies) · #3 opened about 3 years ago by kslnet