ModelCloud-optimized and validated quants that pass strict quality assurance checks on multiple benchmarks. No one quantize
- ModelCloud/QwQ-32B-gptqmodel-4bit-vortex-v1
  Text Generation • Updated • 2.09k downloads • 8 likes
- ModelCloud/DeepSeek-R1-Distill-Qwen-7B-gptqmodel-4bit-vortex-v2
  Text Generation • Updated • 4.07k downloads • 6 likes
- ModelCloud/DeepSeek-R1-Distill-Qwen-7B-gptqmodel-4bit-vortex-v1
  Text Generation • Updated • 192 downloads • 5 likes
- ModelCloud/Falcon3-10B-Instruct-gptqmodel-4bit-vortex-mlx-v1
  Updated • 26 downloads • 3 likes
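
The non-MLX repos above are standard GPTQ checkpoints, so they can be loaded through Hugging Face transformers once a GPTQ backend such as the gptqmodel package is installed. A minimal sketch, assuming that environment and picking the QwQ-32B repo from the list (any of the other text-generation repos would load the same way; the prompt is purely illustrative):

```python
# Minimal loading sketch for one of the 4-bit vortex quants listed above.
# Assumes torch, transformers, and the gptqmodel backend are installed and
# that a GPU with enough memory for the 4-bit weights is available.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ModelCloud/QwQ-32B-gptqmodel-4bit-vortex-v1"  # any repo from the list

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Quick generation round-trip to confirm the quantized weights load and run.
prompt = "Briefly explain what 4-bit GPTQ quantization trades off."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The Falcon3 repo tagged mlx-v1 targets Apple's MLX runtime instead and would be loaded with the mlx-lm package rather than transformers.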