---
base_model_relation: quantized
quantized_by: Quant-Cartel
base_model: rAIfle/Acolyte-22B
---

```
  e88 88e                               d8
 d888 888b  8888 8888  ,"Y88b 888 8e   d88
C8888 8888D 8888 8888 "8" 888 888 88b d88888
 Y888 888P  Y888 888P ,ee 888 888 888  888
  "88 88"    "88 88"  "88 888 888 888  888
      b
      8b,

  e88'Y88                  d8           888
 d888  'Y  ,"Y88b 888,8,  d88    ,e e,  888
C8888     "8" 888 888 "  d88888 d88 88b 888
 Y888  ,d  ,ee 888 888    888   888   , 888
  "88,d88  "88 888 888    888    "YeeP" 888

PROUDLY PRESENTS
```

# Acolyte-22B-exl2-longcal

Quantized using 115 rows of 8192 tokens from the default ExLlamaV2 calibration dataset.
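
A quant along these lines can be reproduced with ExLlamaV2's `convert.py`. The sketch below is a best-effort reconstruction rather than the exact command used here: the local paths are placeholders, and the flags follow the exllamav2 conversion docs (`-b` bits per weight, `-hb` lm_head bits, `-r` calibration rows, `-l` tokens per row; omitting `-c` selects the default built-in calibration dataset).

```python
import subprocess

# Hypothetical paths; point them at a local exllamav2 checkout and the
# unquantized rAIfle/Acolyte-22B weights.
subprocess.run(
    [
        "python", "convert.py",
        "-i", "/models/Acolyte-22B",           # fp16 input model
        "-o", "/tmp/exl2-work",                # scratch/working directory
        "-cf", "/models/Acolyte-22B-4.63bpw",  # compiled output directory
        "-b", "4.63",                          # target bpw (see branch list below)
        "-hb", "6",                            # lm_head bits
        "-r", "115",                           # calibration rows
        "-l", "8192",                          # tokens per calibration row
        # no -c flag: the default built-in calibration dataset is used
    ],
    check=True,
)
```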

Branches (a download sketch follows the list):

- `main` -- `measurement.json`
- `8b8h` -- 8bpw, 8bit lm_head
- `4.63b6h` -- 4.63bpw, 6bit lm_head
- `3.09b6h` -- 3.09bpw, 6bit lm_head
- `2.32b6h` -- 2.32bpw, 6bit lm_head
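
To fetch a single quant, download only the branch you want. A minimal sketch using `huggingface_hub`; the repo id is inferred from this card, and the revision is any branch name from the list above:

```python
from huggingface_hub import snapshot_download

# Each quant lives on its own branch, so select it via `revision`.
# Repo id inferred from this card; adjust if it differs.
local_dir = snapshot_download(
    repo_id="Quant-Cartel/Acolyte-22B-exl2-longcal",
    revision="4.63b6h",  # any branch from the list above
)
print(local_dir)
```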

Original model link: [rAIfle/Acolyte-22B](https://huggingface.co/rAIfle/Acolyte-22B)

Original model README below.

-----

# Acolyte-22B

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6569a4ed2419be6072890cf8/3dcGMcrWK2-2vQh9QBt3o.png)

A LoRA trained on a bunch of random datasets on top of Mistral-Small-Instruct-2409, then SLERPed onto the base model at a weight of 0.5. Decent enough for its size.
Check the [LoRA](https://huggingface.co/rAIfle/Acolyte-LORA) for dataset info.

Use the `Mistral V2 & V3` template.
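
To run one of the quants from this repo, a minimal generation sketch with the exllamav2 Python API; the model path is a placeholder, and the prompt shows the `[INST] ... [/INST]` wrapping that the Mistral V2/V3 instruct templates apply:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Placeholder path: a downloaded quant directory (see the branch list above).
config = ExLlamaV2Config("/models/Acolyte-22B-exl2-longcal")
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)

# Mistral V2/V3 instruct format wraps the user turn in [INST] ... [/INST].
prompt = "[INST] Write a short scene set in a monastery library. [/INST]"
print(generator.generate(prompt=prompt, max_new_tokens=200))
```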