ParScale

Base models trained on 1T high-quality tokens, demonstrating strong competitiveness among existing SOTA small models (<2B).
Models (67)
ParScale/ParScale-1.8B-P1-Inst
Text Generation • 2B • Updated • 5 • 1
ParScale/ParScale-1.8B-P2-Inst
Text Generation • 2B • Updated • 4
ParScale/ParScale-1.8B-P4-Inst
Text Generation • 2B • Updated • 7 • 1
ParScale/ParScale-1.8B-P8-Inst
Text Generation • 2B • Updated • 11 • 2
ParScale/ParScale-1.8B-P1
Text Generation • 2B • Updated • 11 • 1
ParScale/ParScale-1.8B-P2
Text Generation • 2B • Updated • 14
ParScale/ParScale-1.8B-P4
Text Generation • 2B • Updated • 61 • 1
ParScale/ParScale-Qwen-3B-P2-Python
Text Generation • 3B • Updated • 7
ParScale/ParScale-Qwen-3B-P4-Python
Text Generation • 3B • Updated • 9 • 1
ParScale/ParScale-Qwen-3B-P8-Python
Text Generation • 3B • Updated • 8
Datasets (0)
None public yet
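As a quick illustration, the sketch below shows one way a checkpoint from the model list above might be loaded with the Hugging Face transformers library. The trust_remote_code flag and the generation settings are assumptions for this sketch, not instructions taken from the model cards; check the individual repositories for the recommended usage.

```python
# Minimal sketch of loading one of the listed ParScale checkpoints with transformers.
# Assumption: the repo ships custom modeling code, so trust_remote_code=True is passed;
# consult the model card for the actual recommended settings.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "ParScale/ParScale-1.8B-P2"  # any repo from the list above

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)

prompt = "Parallel scaling for language models"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```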