Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics
Paper: arXiv:2110.01518
How to use prajjwal1/bert-tiny-mnli with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-classification", model="prajjwal1/bert-tiny-mnli")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("prajjwal1/bert-tiny-mnli")
model = AutoModelForSequenceClassification.from_pretrained("prajjwal1/bert-tiny-mnli")
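A minimal inference sketch building on the snippet above. MNLI models score a premise-hypothesis pair, which the pipeline accepts as a text/text_pair dict; since the repo card is missing its YAML metadata, the returned label names may be the generic LABEL_0/LABEL_1/LABEL_2 rather than entailment/neutral/contradiction, so the mapping is an assumption:

# Example sentences are illustrative only
result = pipe({"text": "A soccer game with multiple males playing.",
               "text_pair": "Some men are playing a sport."})
print(result)  # e.g. [{'label': 'LABEL_0', 'score': ...}]

# Equivalent call using the tokenizer/model loaded above
import torch
inputs = tokenizer("A soccer game with multiple males playing.",
                   "Some men are playing a sport.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # index into the model's three labels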
Check out the documentation for more information.
This model is a PyTorch model obtained by converting the TensorFlow checkpoint found in the official Google BERT repository. These BERT variants were introduced in the paper Well-Read Students Learn Better: On the Importance of Pre-training Compact Models. This model was then fine-tuned on MNLI.
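The card does not give the exact conversion command; a sketch following the standard transformers loading path for TensorFlow checkpoints (the checkpoint directory name below refers to Google's released bert-tiny files, and the output path is hypothetical):

# Assumed conversion workflow, not necessarily the author's exact command.
# Requires TensorFlow installed alongside PyTorch; paths are illustrative.
from transformers import BertConfig, BertForPreTraining

config = BertConfig.from_json_file("uncased_L-2_H-128_A-2/bert_config.json")
pt_model = BertForPreTraining.from_pretrained(
    "uncased_L-2_H-128_A-2/bert_model.ckpt.index", from_tf=True, config=config
)
pt_model.save_pretrained("bert-tiny-pt")  # PyTorch weights, ready for fine-tuning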
If you use the model, please consider citing the paper:
@misc{bhargava2021generalization,
  title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics},
  author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers},
  year={2021},
  eprint={2110.01518},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
The original implementation and more information can be found in this GitHub repository.
MNLI accuracy: 60%
MNLI-mm accuracy: 61.61%
The model was fine-tuned for 4 epochs.
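A hedged sketch of how these numbers could be reproduced on the MNLI validation splits, reusing the tokenizer and model loaded above and assuming the classifier head follows the GLUE MNLI label order (the missing repo metadata means this ordering is an assumption):

from datasets import load_dataset
import torch

def accuracy(split):
    # Evaluate example by example; batching would be faster but this keeps the sketch short
    data = load_dataset("glue", "mnli", split=split)
    correct = 0
    for ex in data:
        inputs = tokenizer(ex["premise"], ex["hypothesis"],
                           return_tensors="pt", truncation=True)
        with torch.no_grad():
            pred = model(**inputs).logits.argmax(dim=-1).item()
        correct += int(pred == ex["label"])  # assumes matching label order
    return correct / len(data)

print(f"MNLI: {accuracy('validation_matched'):.2%}")
print(f"MNLI-mm: {accuracy('validation_mismatched'):.2%}")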