Instructions to use hyper-accel/tiny-random-internlm2 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use hyper-accel/tiny-random-internlm2 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="hyper-accel/tiny-random-internlm2", trust_remote_code=True)
```

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("hyper-accel/tiny-random-internlm2", trust_remote_code=True, dtype="auto")
```

- Notebooks
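A feature-extraction pipeline returns nested lists of per-token hidden states. The sketch below shows one common way to post-process such output into a single sentence vector by mean-pooling over tokens; the array shapes here are illustrative assumptions, not values taken from this model's config.

```python
import numpy as np

# Simulated pipeline output for one input sequence:
# 4 tokens, hidden size 8 (real sizes depend on the model config).
features = [[[0.1] * 8 for _ in range(4)]]

arr = np.array(features)               # shape: (1, num_tokens, hidden_size)
sentence_embedding = arr.mean(axis=1)  # mean-pool over the token axis
print(sentence_embedding.shape)        # (1, 8)
```

In practice you would pass `pipe("your text")` output in place of the simulated `features` list.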
- Google Colab
- Kaggle