# rStar-Coder-Qwen3-0.6B-GGUF
rStar-Coder-Qwen3-0.6B is a compact, multi-domain language model fine-tuned from Qwen3-0.6B on the rStar-Coder dataset, which incorporates code expert clusters and an extended symbolic reasoning collection. It targets unified reasoning across code, mathematics, and science, covering code generation, algorithm synthesis, multi-language error detection, and step-by-step scientific problem solving, and it supports structured output in LaTeX, Markdown, JSON, CSV, and YAML. The model is intended for developers, educators, and researchers who need efficient STEM-oriented AI on mid-range GPUs, offline clusters, and edge devices, and it emphasizes logic-driven responses and technical data generation over general chat or creative writing.
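For local inference, a minimal sketch with llama-cpp-python is shown below; the quant file name is an assumption taken from the Model Files table, and the context size, GPU offload, and prompt are illustrative only.

```python
# Minimal sketch (assumption: the Q4_K_M quant from the Model Files table
# has already been downloaded to the working directory).
from llama_cpp import Llama

llm = Llama(
    model_path="rStar-Coder-Qwen3-0.6B.Q4_K_M.gguf",  # any quant from the table works
    n_ctx=4096,        # illustrative context window
    n_gpu_layers=-1,   # offload all layers to GPU; use 0 for CPU-only edge devices
)

result = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Write a Python function that returns the n-th Fibonacci number."}
    ],
    max_tokens=256,
    temperature=0.2,
)
print(result["choices"][0]["message"]["content"])
```

Lower-bit quants from the table trade accuracy for memory, which is the main lever for fitting the model onto smaller devices.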
## Model Files

| File Name | Size | Quant Type |
| --- | --- | --- |
| rStar-Coder-Qwen3-0.6B.BF16.gguf | 1.2 GB | BF16 |
| rStar-Coder-Qwen3-0.6B.F16.gguf | 1.2 GB | F16 |
| rStar-Coder-Qwen3-0.6B.F32.gguf | 2.39 GB | F32 |
| rStar-Coder-Qwen3-0.6B.Q2_K.gguf | 296 MB | Q2_K |
| rStar-Coder-Qwen3-0.6B.Q3_K_L.gguf | 368 MB | Q3_K_L |
| rStar-Coder-Qwen3-0.6B.Q3_K_M.gguf | 347 MB | Q3_K_M |
| rStar-Coder-Qwen3-0.6B.Q3_K_S.gguf | 323 MB | Q3_K_S |
| rStar-Coder-Qwen3-0.6B.Q4_K_M.gguf | 397 MB | Q4_K_M |
| rStar-Coder-Qwen3-0.6B.Q4_K_S.gguf | 383 MB | Q4_K_S |
| rStar-Coder-Qwen3-0.6B.Q5_K_M.gguf | 444 MB | Q5_K_M |
| rStar-Coder-Qwen3-0.6B.Q5_K_S.gguf | 437 MB | Q5_K_S |
| rStar-Coder-Qwen3-0.6B.Q6_K.gguf | 495 MB | Q6_K |
| rStar-Coder-Qwen3-0.6B.Q8_0.gguf | 639 MB | Q8_0 |
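Any of the files above can be fetched programmatically. A small sketch using huggingface_hub follows; the repo id comes from this card and the file name is one row from the table (adjust the quant to your hardware).

```python
# Minimal sketch: download one quant file from the Hub.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="prithivMLmods/rStar-Coder-Qwen3-0.6B-GGUF",
    filename="rStar-Coder-Qwen3-0.6B.Q4_K_M.gguf",  # pick any row from the table above
)
print(local_path)  # cached local path to pass to a GGUF runtime such as llama.cpp
```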
## Quants Usage

The quants above are listed by type rather than strictly by quality; lower-bit files are smaller but lossier, and IQ-quants are often preferable to non-IQ quants of similar size. A comparison graph by ikawrakow of the lower-quality quant types (lower is better) illustrates this trade-off.
## Model tree for prithivMLmods/rStar-Coder-Qwen3-0.6B-GGUF

Base model: Qwen/Qwen3-0.6B-Base