Instructions for using amazon/bort with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use amazon/bort with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="amazon/bort")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("amazon/bort")
model = AutoModelForMaskedLM.from_pretrained("amazon/bort")
```

- Notebooks
- Google Colab
- Kaggle
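A minimal sketch of running the fill-mask pipeline end to end. The example sentence is illustrative only (not from the model card), and the tokenizer's own mask token is used so the snippet works regardless of whether the model expects `[MASK]` or `<mask>`:

```python
# Hedged usage sketch for amazon/bort with the fill-mask pipeline.
from transformers import pipeline

pipe = pipeline("fill-mask", model="amazon/bort")

# Build the input with the tokenizer's actual mask token rather than
# hardcoding "[MASK]" or "<mask>".
mask = pipe.tokenizer.mask_token
results = pipe(f"The capital of France is {mask}.")

# Each result is a dict with the filled token and its score.
for r in results:
    print(r["token_str"], round(r["score"], 4))
```

The pipeline returns the top candidate fills ranked by score; pass `top_k=<n>` to the pipeline call to change how many are returned.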
- Xet hash: f24c45b4948f4ed1100a551405497d6a4ee18da3514523c5fe9c215115ec4a62
- Size of remote file: 152 MB
- SHA256: 91e6209628d93f11c40e8141aa8ca9bbec473e478a9a9d881e3d1a74d6524fd2
Xet efficiently stores large files inside Git by intelligently splitting them into unique chunks, which accelerates uploads and downloads.