Tags: Text Classification, Transformers, Safetensors, English, qwen3, reward, RM, Code, CodeScaler, text-embeddings-inference
Instructions for using LARK-Lab/CodeScaler-1.7B with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use LARK-Lab/CodeScaler-1.7B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="LARK-Lab/CodeScaler-1.7B")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("LARK-Lab/CodeScaler-1.7B")
model = AutoModelForSequenceClassification.from_pretrained("LARK-Lab/CodeScaler-1.7B")
```

- Notebooks
- Google Colab
- Kaggle
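Since the tags mark this as a reward model (RM) exposed through the text-classification head, the raw output is a logit rather than a human-readable score. As a minimal sketch of how that logit might be post-processed, assuming the model emits a single logit per prompt/response pair (typical for reward models, but not confirmed by this card) and assuming a hypothetical plain-text pairing template rather than the model's actual chat template:

```python
import math

def format_pair(prompt: str, response: str) -> str:
    # Hypothetical template for pairing a task with a candidate solution;
    # the real input format expected by CodeScaler-1.7B may differ.
    return f"Question: {prompt}\nAnswer: {response}"

def logit_to_reward(logit: float) -> float:
    # Squash a raw classification logit into a (0, 1) score via the sigmoid.
    return 1.0 / (1.0 + math.exp(-logit))

# Build the text that would be fed to the tokenizer/model.
text = format_pair(
    "Write a function that reverses a string.",
    "def rev(s):\n    return s[::-1]",
)

# In practice the logit would come from a forward pass, e.g.:
#   logits = model(**tokenizer(text, return_tensors="pt")).logits
# Here a placeholder value stands in for that output.
score = logit_to_reward(2.0)
```

Higher sigmoid scores would then indicate responses the reward model prefers, which is how such scores are typically consumed when ranking candidate code completions.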