Maintainer / Publisher: Susant Achary
This repository provides an Apple-Silicon MLX build of IBM Granite-4.0-H-Tiny quantized to 5-bit.
If you need more faithfulness than 3/4-bit but want lower RAM than 6-bit, 5-bit is a strong middle ground—especially for document parsing, structured extraction, and long-context assistants on Mac.
This card documents the MLX 5-bit conversion. See the comparison table below for when to choose 3/4-bit (lower RAM) or 6-bit (highest fidelity).
Repository contents:
- `config.json` (MLX)
- `mlx_model*.safetensors` (5-bit shards)
- `tokenizer.json`, `tokenizer_config.json`
- `model_index.json`

Target platform: macOS on Apple Silicon (M-series) with Metal/MPS acceleration.
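As a quick smoke test of the files listed above, here is a minimal load-and-inspect sketch with the `mlx-lm` package (assumes `pip install mlx-lm`; `<this-repo-id>` is a placeholder for this repository's Hugging Face id):

```python
# Smoke test: fetch the shards and confirm the model and tokenizer load.
from mlx_lm import load

model, tokenizer = load("<this-repo-id>")  # placeholder repo id

print(type(model).__name__)         # architecture resolved from config.json
print(len(tokenizer.encode("ok")))  # tokenizer round-trips via tokenizer.json
```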
Indicative ranges for a ~7B hybrid MoE LM (actual usage varies by context length and batch size).
| Variant | Typical Peak RAM | Relative Speed | Typical Behavior | When to Choose |
|---|---|---|---|---|
| 2-bit | ~3–4 GB | 🔥🔥🔥🔥 | Smallest, most lossy | Minimal RAM devices; smoke tests |
| 3-bit | ~5–6 GB | 🔥🔥🔥🔥 | Direct, concise | Great default on M1/M2/M3/M4 |
| 4-bit | ~6–7.5 GB | 🔥🔥🔥 | Better detail retention vs 3-bit | If 3-bit misses small details |
| 5-bit (this repo) | ~8–9 GB | 🔥🔥☆ | Higher fidelity, fewer omissions | When you want stronger document/JSON faithfulness without 6-bit RAM |
| 6-bit | ~9.5–11 GB | 🔥🔥 | Highest MLX fidelity | If RAM permits and you need maximum quality |
Rules of thumb
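A back-of-envelope way to sanity-check the table (illustrative arithmetic only, not a measurement): the quantized weights alone take roughly params × bits-per-weight / 8 bytes, and the KV cache, quantization scales, and runtime overhead add several GB on top.

```python
# Back-of-envelope weight-memory estimate (illustrative only; actual peak
# RAM also includes the KV cache, activations, and runtime overhead).
def weight_memory_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate in-memory size of quantized weights in GB."""
    return n_params * bits_per_weight / 8 / 1e9

# ~7B parameters at 5 bits -> roughly 4.4 GB of weights, which is why the
# observed 5-bit peak sits around 8-9 GB once cache and overhead are added.
for bits in (2, 3, 4, 5, 6):
    print(f"{bits}-bit: ~{weight_memory_gb(7e9, bits):.1f} GB weights")
```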
Deterministic generation
```bash
python -m mlx_lm.generate \
  --model <this-repo-id> \
  --prompt "Summarize the following in 5 bullet points:\n<your text>" \
  --max-tokens 256 \
  --temp 0.0 \
  --seed 0
```

MLX targets the Metal GPU automatically on Apple Silicon, so no device flag is needed.
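The same run can be scripted through the mlx-lm Python API. This is a minimal sketch, assuming a recent mlx-lm release (exact signatures have shifted between versions); `<this-repo-id>` again stands for this repository's Hugging Face id:

```python
# Deterministic generation via the mlx-lm Python API (sketch; greedy
# decoding is the default sampler in recent mlx-lm releases).
import mlx.core as mx
from mlx_lm import load, generate

mx.random.seed(0)  # mirror --seed 0 from the CLI invocation

model, tokenizer = load("<this-repo-id>")  # placeholder repo id

# Granite is an instruct model, so wrap the request in the chat template
# (the CLI applies it automatically; the Python API does not).
messages = [{"role": "user",
             "content": "Summarize the following in 5 bullet points:\n<your text>"}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False
)

text = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(text)
```

With greedy decoding and a fixed seed, repeated runs on the same prompt should produce identical output.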
Base model
ibm-granite/granite-4.0-h-tiny