---
language:
- en
license: mit
base_model:
- mistralai/Mistral-7B-v0.1
datasets:
- HuggingFaceH4/ultrafeedback_binarized
pipeline_tag: text-generation
model-index:
- name: Mistral-ORPO-⍺
  results:
  - task:
      type: text-generation
    dataset:
      name: AlpacaEval 1
      type: AlpacaEval
    metrics:
    - type: AlpacaEval 1.0
      value: 87.92%
      name: Win Rate
    source:
      url: https://github.com/tatsu-lab/alpaca_eval
      name: self-reported
  - task:
      type: text-generation
    dataset:
      name: AlpacaEval 2
      type: AlpacaEval
    metrics:
    - type: AlpacaEval 2.0
      value: 11.33%
      name: Win Rate
    source:
      url: https://github.com/tatsu-lab/alpaca_eval
      name: self-reported
  - task:
      type: text-generation
    dataset:
      name: MT-Bench
      type: MT-Bench
    metrics:
    - type: MT-Bench
      value: 7.23
      name: Score
    source:
      url: https://github.com/lm-sys/FastChat/blob/main/fastchat/llm_judge/
      name: self-reported
quantized_by: bartowski
---
## Exllama v2 Quantizations of mistral-orpo-alpha
Using turboderp's ExLlamaV2 v0.0.15 for quantization.
The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Original model: https://huggingface.co/kaist-ai/mistral-orpo-alpha
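The measurement.json exists so that new bits-per-weight targets can be produced without re-running ExLlamaV2's measurement pass. A minimal sketch of such a conversion, assuming a local copy of the original fp16 model and the exllamav2 repo at v0.0.15 checked out; the 6.5/8 bit targets below are illustrative:

```shell
# Reuse the precomputed measurement.json to quantize to a new target.
#   -i  : original fp16 model directory
#   -o  : scratch/working directory
#   -cf : output directory for the compiled quantized model
#   -m  : measurement.json from the main branch of this repo
#   -b  : target bits per weight; -hb : lm_head bits
python convert.py -i ./mistral-orpo-alpha -o ./work \
  -cf ./mistral-orpo-alpha-exl2-6.5 -m ./measurement.json -b 6.5 -hb 8
```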
| Branch | Bits | lm_head bits | VRAM (4k ctx) | VRAM (16k ctx) | VRAM (32k ctx) | Description |
| ------ | ---- | ------------ | ------------- | -------------- | -------------- | ----------- |
| 8_0 | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| 6_5 | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, recommended. |
| 5_0 | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8 GB cards. |
| 4_25 | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ-equivalent bits per weight, slightly higher quality. |
| 3_5 | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. |
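Each branch name matches the `Branch` column above, so you can check what is available without cloning anything:

```shell
# List the available quantization branches (one per bits-per-weight target)
git ls-remote --heads https://huggingface.co/bartowski/mistral-orpo-alpha-exl2
```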
## Download instructions
With git:

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/mistral-orpo-alpha-exl2 mistral-orpo-alpha-exl2-6_5
```
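The weights are stored via Git LFS, so LFS needs to be active before cloning or you will end up with small pointer files instead of the real shards (a one-time setup step, assuming git-lfs is installed on the system):

```shell
# One-time setup: enable Git LFS so clones fetch the actual weight files
git lfs install
```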
With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```
To download the `main` branch (only useful if you only care about measurement.json) to a folder called `mistral-orpo-alpha-exl2`:
```shell
mkdir mistral-orpo-alpha-exl2
huggingface-cli download bartowski/mistral-orpo-alpha-exl2 --local-dir mistral-orpo-alpha-exl2 --local-dir-use-symlinks False
```
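If the download is slow, huggingface-hub also has an optional accelerated transfer backend you can switch on; a hedged sketch (it requires the extra `hf_transfer` package, and the environment variable has no effect without it):

```shell
# Optional: accelerated downloads through the hf_transfer backend
pip3 install hf_transfer
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download bartowski/mistral-orpo-alpha-exl2 --local-dir mistral-orpo-alpha-exl2 --local-dir-use-symlinks False
```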
To download from a different branch, add the `--revision` parameter:
Linux:

```shell
mkdir mistral-orpo-alpha-exl2-6_5
huggingface-cli download bartowski/mistral-orpo-alpha-exl2 --revision 6_5 --local-dir mistral-orpo-alpha-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't always like `_` in folder names):

```shell
mkdir mistral-orpo-alpha-exl2-6.5
huggingface-cli download bartowski/mistral-orpo-alpha-exl2 --revision 6_5 --local-dir mistral-orpo-alpha-exl2-6.5 --local-dir-use-symlinks False
```
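Once a branch is downloaded, a quick smoke test is to run it through ExLlamaV2's bundled `test_inference.py` (a minimal sketch, assuming you run it from the root of a cloned exllamav2 repo; the prompt is just an example):

```shell
# Load the quantized model and generate from a short prompt to verify the files
python test_inference.py -m ./mistral-orpo-alpha-exl2-6_5 -p "Once upon a time,"
```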
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski