ParScale

Base models trained on 1T high-quality tokens, demonstrating strong competitiveness among existing SOTA small models (<2B).

AI & ML interests: None defined yet.

Models (67)

ParScale/ParScale-1.8B-P1-Inst • Text Generation • 2B • 74 downloads • 1 like
ParScale/ParScale-1.8B-P2-Inst • Text Generation • 2B • 5 downloads
ParScale/ParScale-1.8B-P4-Inst • Text Generation • 2B • 16 downloads • 1 like
ParScale/ParScale-1.8B-P8-Inst • Text Generation • 2B • 33 downloads • 2 likes
ParScale/ParScale-1.8B-P1 • Text Generation • 2B • 4 downloads • 1 like
ParScale/ParScale-1.8B-P2 • Text Generation • 2B • 7 downloads
ParScale/ParScale-1.8B-P4 • Text Generation • 2B • 62 downloads • 1 like
ParScale/ParScale-Qwen-3B-P2-Python • Text Generation • 3B • 11 downloads
ParScale/ParScale-Qwen-3B-P4-Python • Text Generation • 3B • 22 downloads
ParScale/ParScale-Qwen-3B-P8-Python • Text Generation • 3B • 50 downloads
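
The checkpoints listed above are causal language models hosted on the Hugging Face Hub, so they can be loaded with the transformers library. The snippet below is a minimal sketch, not an official usage guide: it assumes the repositories ship a custom ParScale architecture (hence trust_remote_code=True) and that bfloat16 inference is acceptable; the model id is taken from the list above, and the prompt is purely illustrative.

```python
# Minimal sketch: loading a listed ParScale checkpoint with Hugging Face transformers.
# Assumption: the repo defines a custom architecture, so trust_remote_code=True may be required.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ParScale/ParScale-1.8B-P1-Inst"  # taken from the model list above

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: half precision is sufficient for inference
    trust_remote_code=True,
)

prompt = "Explain parallel scaling in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pattern should apply to the other P1/P2/P4/P8 and Qwen-3B variants by swapping the model id.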
Datasets (0): none public yet.