This is mistralai/Mistral-Small-Instruct-2409, converted to GGUF and quantized to q8_0. Both the main model weights and the embedding/output tensors are q8_0.
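For reference, a conversion along these lines can be done with llama.cpp's convert_hf_to_gguf.py script, which accepts q8_0 directly as an output type. The exact script name and flags vary between llama.cpp versions, so treat this as a sketch rather than the exact command used for this upload:

```sh
# Sketch: convert the HF checkpoint straight to a q8_0 GGUF.
# --outtype q8_0 quantizes all tensors, including the
# embedding and output tensors, to q8_0 in a single pass.
python convert_hf_to_gguf.py ./Mistral-Small-Instruct-2409 \
    --outfile Mistral-Small-Instruct-2409-q8_0.gguf \
    --outtype q8_0
```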
The model is split into shards no larger than 2 GB using the llama-gguf-split CLI utility from llama.cpp. This makes it less painful to resume the download if it is interrupted.
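The split step would look roughly like the following; --split-max-size is an existing llama-gguf-split option, while the file names are illustrative:

```sh
# Sketch: split the single GGUF into shards of at most 2 GB each.
# llama-gguf-split names the outputs <prefix>-00001-of-0000N.gguf
# automatically.
llama-gguf-split --split --split-max-size 2G \
    Mistral-Small-Instruct-2409-q8_0.gguf \
    Mistral-Small-Instruct-2409-q8_0
```

llama.cpp can load a split model by pointing at the first shard, and the shards can be merged back into a single file with llama-gguf-split --merge.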
The purpose of this upload is archival.