These are GGUF quantized versions of mistralai/Mixtral-8x7B-v0.1.
The importance matrix was trained for 100K tokens (200 batches of 512 tokens) using wiki.train.raw.
Model files larger than 50 GB are split into smaller parts. To reassemble them, concatenate the parts with `cat` (on Windows, use PowerShell): `cat foo-Q6_K.gguf.* > foo-Q6_K.gguf`
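As a quick sanity check of the concatenation step, the sketch below splits a dummy file and reassembles it the same way; the `foo.bin` names are illustrative, not actual model files:

```shell
# Create a dummy file, split it into parts, then reassemble with cat.
# File names here are placeholders, not real GGUF files.
printf 'GGUF example payload' > foo.bin
split -b 8 foo.bin foo.bin.part        # produces foo.bin.partaa, foo.bin.partab, ...
cat foo.bin.part* > foo-joined.bin     # the shell glob expands parts in sorted order
cmp foo.bin foo-joined.bin && echo "reassembly OK"
```

The glob works because `split` names the parts in lexicographic order, so `cat` receives them in the correct sequence.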
- What quant do I need? See https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
- Quant requests? Just open a discussion in the Community tab.