Mistral-7B-OpenOrca-GGUF

Available Quants

  • Q2_K
  • Q3_K_L
  • Q3_K_M
  • Q3_K_S
  • Q4_0
  • Q4_K_M
  • Q4_K_S
  • Q5_0
  • Q5_K_M
  • Q5_K_S
  • Q6_K
  • Q8_0
Downloads last month: 437

Format: GGUF
Model size: 7.24B params
Architecture: llama
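To give a feel for the trade-off between the quant levels listed above, here is a rough file-size estimator: parameter count × bits-per-weight ÷ 8. The bits-per-weight figures below are assumptions (typical effective values for llama.cpp k-quants, not taken from this repo), so treat the output as ballpark only.

```python
# Rough size estimates for the quants listed above.
# PARAMS comes from the card (7.24B); bits-per-weight values are
# ASSUMED typical figures for llama.cpp quant types, not exact.
PARAMS = 7.24e9

BITS_PER_WEIGHT = {
    "Q2_K": 2.6, "Q3_K_S": 3.5, "Q3_K_M": 3.9, "Q3_K_L": 4.3,
    "Q4_0": 4.5, "Q4_K_S": 4.6, "Q4_K_M": 4.8,
    "Q5_0": 5.5, "Q5_K_S": 5.5, "Q5_K_M": 5.7,
    "Q6_K": 6.6, "Q8_0": 8.5,
}

def est_size_gb(quant: str) -> float:
    """Estimated file size in GB: params * bits / 8 (ignores GGUF metadata overhead)."""
    return PARAMS * BITS_PER_WEIGHT[quant] / 8 / 1e9

for q in BITS_PER_WEIGHT:
    print(f"{q:7s} ~{est_size_gb(q):.1f} GB")
```

The general pattern holds regardless of the exact bpw numbers: Q2_K is the smallest (and lossiest), Q8_0 the largest (near-lossless), with the K-quants in between usually offering the best quality-per-byte.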

Inference Examples
Inference API (serverless) has been turned off for this model.
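Since the serverless API is off, the model has to be run locally. A minimal sketch using llama-cpp-python is below; the GGUF filename is hypothetical (substitute whichever quant file you downloaded from this repo), and the ChatML prompt template is an assumption based on how OpenOrca fine-tunes are commonly chat-formatted.

```python
def format_chatml(system: str, user: str) -> str:
    """Build a ChatML-style prompt (assumed template for OpenOrca fine-tunes)."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

if __name__ == "__main__":
    from llama_cpp import Llama  # pip install llama-cpp-python

    # Hypothetical filename -- use the actual quant file you downloaded.
    llm = Llama(model_path="mistral-7b-openorca.Q4_K_M.gguf", n_ctx=4096)

    prompt = format_chatml(
        "You are a helpful assistant.",
        "Explain GGUF in one sentence.",
    )
    out = llm(prompt, max_tokens=128, stop=["<|im_end|>"])
    print(out["choices"][0]["text"])
```

Smaller quants (Q2_K, Q3_K_S) trade answer quality for lower RAM use; Q4_K_M is a common middle-ground choice for 7B models.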

Model tree for QuantFactory/Mistral-7B-OpenOrca-GGUF

This model is one of 13 quantized variants in the model tree.
