
Mistral-7B-v0.1-GGUF

Available Quants

  • Q2_K
  • Q3_K_L
  • Q3_K_M
  • Q3_K_S
  • Q4_0
  • Q4_K_M
  • Q4_K_S
  • Q5_0
  • Q5_K_M
  • Q5_K_S
  • Q6_K
  • Q8_0
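Each quant trades file size and memory use for accuracy: lower-bit quants (Q2_K, Q3_K_*) are smaller but lossier, while Q6_K and Q8_0 stay close to the original weights. As a rough back-of-envelope sketch, file size scales with parameter count times bits per weight. The bits-per-weight figures below are approximations assumed for illustration, not taken from this repo's file listing:

```python
def estimate_gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough GGUF file-size estimate: params x bits/weight, in gigabytes.

    Ignores file metadata and the few tensors llama.cpp keeps at higher
    precision, so real files run slightly larger than this estimate.
    """
    return n_params * bits_per_weight / 8 / 1e9

# Approximate bits per weight for a few quant types (assumed values
# for illustration only):
APPROX_BPW = {"Q2_K": 2.6, "Q4_K_M": 4.8, "Q5_K_M": 5.7, "Q8_0": 8.5}

for name, bpw in APPROX_BPW.items():
    print(f"{name}: ~{estimate_gguf_size_gb(7.24e9, bpw):.1f} GB")
```

For a 7.24B-parameter model this puts Q4_K_M in the 4-5 GB range and Q8_0 near 8 GB, which is why the K-quants in the 4-5 bit range are a common default on consumer hardware.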

README format inspired by mlabonne.

Format: GGUF
Model size: 7.24B params
Architecture: llama


Inference
The serverless Inference API has been turned off for this model.
