---
license: cc-by-nc-4.0
language:
  - en
quantized_by: TheMelonGod
pipeline_tag: text-generation
tags:
  - quantized
  - safetensors
  - exllamav2
  - mistral
base_model:
  - Sao10K/MN-12B-Lyra-v4
base_model_relation: quantized
---

# MN-12B-Lyra-v4-exl2

Original model by: Sao10K
Original model: MN-12B-Lyra-v4

**ExLlamaV2 Quantizations** (hb = head bits, the precision used for the output layer):

| bpw  | 8-bit head | 6-bit head |
|------|------------|------------|
| 8.0  | 8hb | 6hb |
| 7.5  | 8hb | 6hb |
| 7.0  | 8hb | 6hb |
| 6.5  | 8hb | 6hb |
| 6.0  | 8hb | 6hb |
| 5.5  | 8hb | 6hb |
| 5.0  | 8hb | 6hb |
| 4.5  | 8hb | 6hb |
| 4.25 | 8hb | 6hb |
| 4.0  | 8hb | 6hb |
| 3.75 | 8hb | 6hb |
| 3.5  | 8hb | 6hb |
| 3.0  | 8hb | 6hb |
| 2.75 | 8hb | 6hb |
| 2.5  | 8hb | 6hb |
| 2.25 | 8hb | 6hb |
| 2.0  | 8hb | 6hb |
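If you are unsure which bpw to pick, a rough back-of-the-envelope estimate (not an official figure, just parameter count × bits per weight ÷ 8 bits per byte) gives the weight footprint for this 12B model at each bit rate:

```python
def approx_weight_gb(n_params_billion: float, bpw: float) -> float:
    """Rough weight footprint in GB: billions of parameters times
    bits per weight, divided by 8 bits per byte.
    Ignores the KV cache and runtime overhead."""
    return n_params_billion * bpw / 8.0

# A 12B model at a few of the listed bit rates:
for bpw in (8.0, 6.0, 4.0, 2.5):
    print(f"{bpw} bpw -> ~{approx_weight_gb(12, bpw):.1f} GB of weights")
# 8.0 bpw -> ~12.0 GB, 6.0 bpw -> ~9.0 GB, 4.0 bpw -> ~6.0 GB, 2.5 bpw -> ~3.8 GB
```

Actual VRAM use will be somewhat higher once the KV cache and activations are included, so leave headroom beyond the weight size.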

Measurement file (the default/built-in calibration dataset was used)

If you need a specific model quantized or a particular bits-per-weight setting, please let me know. I'm happy to help quantize lesser-known models.

Your feedback and suggestions are always welcome; they help me make these quantizations better for everyone. Special thanks to turboderp for developing the tools that made these quantizations possible. Your contributions are greatly appreciated!