---
language:
- en
license: mit
base_model:
- mistralai/Mistral-7B-v0.1
datasets:
- argilla/distilabel-capybara-dpo-7k-binarized
pipeline_tag: text-generation
model-index:
- name: Mistral-ORPO-Capybara-7k
  results:
  - task:
      type: text-generation
    dataset:
      name: AlpacaEval 2 (LC)
      type: AlpacaEval
    metrics:
    - type: AlpacaEval 2.0
      value: 15.88%
      name: Win Rate
    source:
      url: https://tatsu-lab.github.io/alpaca_eval/
      name: self-reported
  - task:
      type: text-generation
    dataset:
      name: MT-Bench
      type: MT-Bench
    metrics:
    - type: MT-Bench
      value: 7.444
      name: Score
    source:
      url: https://github.com/lm-sys/FastChat/blob/main/fastchat/llm_judge/
      name: self-reported
quantized_by: bartowski
---

## Llamacpp Quantizations of mistral-orpo-capybara-7k

Using llama.cpp release b2440 for quantization.
Original model: https://huggingface.co/kaist-ai/mistral-orpo-capybara-7k
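
These GGUF files load with llama.cpp and any compatible binding. As one possible usage sketch (the llama-cpp-python binding, the chosen filename, and the parameters below are illustrative assumptions, not part of this card):

```python
# Minimal sketch: run one of the quants below with the llama-cpp-python binding.
# Any llama.cpp-compatible runtime recent enough to read these GGUF files works;
# the filename and settings here are illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-orpo-capybara-7k-Q4_K_M.gguf",  # any file from the table below
    n_ctx=4096,        # context window; lower it if RAM is tight
    n_gpu_layers=-1,   # offload all layers on a GPU build, or 0 for CPU only
)

# For instruction-style prompts, apply the original model's chat template.
out = llm("Summarize what ORPO fine-tuning does.", max_tokens=128)
print(out["choices"][0]["text"])
```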

Download a single file (not the whole branch) from the table below; a short download sketch follows the table.

Filename | Quant type | File Size | Description |
---|---|---|---|
mistral-orpo-capybara-7k-Q8_0.gguf | Q8_0 | 7.69GB | Extremely high quality, generally unneeded but max available quant. |
mistral-orpo-capybara-7k-Q6_K.gguf | Q6_K | 5.94GB | Very high quality, near perfect, recommended. |
mistral-orpo-capybara-7k-Q5_K_M.gguf | Q5_K_M | 5.13GB | High quality, very usable. |
mistral-orpo-capybara-7k-Q5_K_S.gguf | Q5_K_S | 4.99GB | High quality, very usable. |
mistral-orpo-capybara-7k-Q5_0.gguf | Q5_0 | 4.99GB | High quality, older format, generally not recommended. |
mistral-orpo-capybara-7k-Q4_K_M.gguf | Q4_K_M | 4.36GB | Good quality, similar to 4.25 bpw. |
mistral-orpo-capybara-7k-Q4_K_S.gguf | Q4_K_S | 4.14GB | Slightly lower quality with small space savings. |
mistral-orpo-capybara-7k-IQ4_NL.gguf | IQ4_NL | 4.15GB | Good quality, similar to Q4_K_S, uses a newer quantization method. |
mistral-orpo-capybara-7k-IQ4_XS.gguf | IQ4_XS | 3.94GB | Decent quality, new method with similar performance to Q4. |
mistral-orpo-capybara-7k-Q4_0.gguf | Q4_0 | 4.10GB | Decent quality, older format, generally not recommended. |
mistral-orpo-capybara-7k-IQ3_M.gguf | IQ3_M | 3.28GB | Medium-low quality, new method with decent performance. |
mistral-orpo-capybara-7k-IQ3_S.gguf | IQ3_S | 3.18GB | Lower quality, new method with decent performance, recommended over Q3 quants. |
mistral-orpo-capybara-7k-Q3_K_L.gguf | Q3_K_L | 3.82GB | Lower quality but usable, good for low RAM availability. |
mistral-orpo-capybara-7k-Q3_K_M.gguf | Q3_K_M | 3.51GB | Even lower quality. |
mistral-orpo-capybara-7k-Q3_K_S.gguf | Q3_K_S | 3.16GB | Low quality, not recommended. |
mistral-orpo-capybara-7k-Q2_K.gguf | Q2_K | 2.71GB | Extremely low quality, not recommended. |
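
To fetch a single file, the huggingface_hub client works; a minimal sketch, assuming the usual bartowski/<model>-GGUF repository naming (the repo_id below is an assumption, so substitute the repository this card actually lives in):

```python
# Minimal sketch: download one quant file instead of cloning the whole branch.
# The repo_id is assumed from typical naming; replace it with the actual repository.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bartowski/mistral-orpo-capybara-7k-GGUF",  # assumed repository name
    filename="mistral-orpo-capybara-7k-Q4_K_M.gguf",    # pick any file from the table above
    local_dir=".",
)
print(f"Saved to {path}")
```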
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski