# ysn-rfd/BabyMistral-GGUF
This model was converted to GGUF format from OEvortex/BabyMistral using llama.cpp via ggml.ai's all-gguf-same-where space.
Refer to the original model card for more details on the model.
## Quantized Models Download List

Recommended for CPU: Q4_K_M | Recommended for ARM CPU: Q4_0 | Best quality: Q8_0
| Download | Notes |
|---|---|
| Download | Basic quantization |
| Download | Small size |
| Download | Balanced quality |
| Download | Better quality |
| Download | Fast on ARM |
| Download | Fast, recommended |
| Download | Best balance |
| Download | Good quality |
| Download | Balanced |
| Download | High quality |
| Download | Very good quality |
| Download | Fast, best quality |
| Download | Maximum accuracy |
Tip: Use F16 for maximum precision when quality is critical.
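Any quant from the table above can be fetched and run directly with llama.cpp. A minimal sketch, assuming a recent local llama.cpp build whose `-hf` flag supports downloading from a Hugging Face repo (the `:Q4_K_M` tag selects the CPU-recommended quant from the tips above; exact flag behavior may vary by llama.cpp version):

```shell
# Download the chosen quant from the Hugging Face repo and run a prompt.
# The -hf flag fetches and caches the GGUF file automatically; the
# :Q4_K_M suffix picks the quant recommended above for CPU inference.
llama-cli -hf ysn-rfd/BabyMistral-GGUF:Q4_K_M -p "Once upon a time" -n 128

# llama-server serves the same model over an OpenAI-compatible HTTP API:
llama-server -hf ysn-rfd/BabyMistral-GGUF:Q4_K_M --port 8080
```

On ARM CPUs, swap the tag for `:Q4_0` per the recommendation above.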
## Model tree for ysn-rfd/BabyMistral-GGUF

Base model: OEvortex/BabyMistral