|
--- |
|
library_name: transformers |
|
language: |
|
- en |
|
license: unknown |
|
tags: |
|
- exl2 |
|
--- |
|
|
|
# miqu-1-70b-sf - EXL2 7.0bpw |
|
|
|
This is a 7.0bpw EXL2 quant of [152334H/miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf).
|
|
|
Details about the base model can be found on the model page linked above.
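To try the quant quickly, you can download it and run it through exllamav2's bundled `test_inference.py`. A minimal sketch, assuming a local exllamav2 checkout and substituting this repository's actual id for the placeholder:

```bash
# Download this 7.0bpw quant into the folder layout used by the scripts below
# (replace <repo-id> with this repository's id on Hugging Face)
huggingface-cli download <repo-id> --local-dir models/miqu-1-70b-sf_exl2_7.0bpw

# Generate a short completion with exllamav2's test script
python test_inference.py -m models/miqu-1-70b-sf_exl2_7.0bpw -p "Once upon a time"
```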
|
|
|
|
|
## EXL2 Version |
|
|
|
These quants were made with exllamav2 version 0.0.18. Quants made with this version may not load on older versions of the exllamav2 library.
|
|
|
If you have problems loading these models, please update Text Generation WebUI to the latest version. |
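If you use the exllamav2 library directly instead, updating the package itself should resolve the same incompatibility:

```bash
pip install --upgrade exllamav2
```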
|
|
|
## Perplexity Scoring |
|
|
|
Below are the perplexity scores for the EXL2 quants, measured against wikitext-2 (see the script below). A lower score is better.
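For reference, perplexity is the exponentiated average negative log-likelihood of the evaluation text under the model, so a lower score means the quant predicts the text more accurately:

$$
\mathrm{PPL} = \exp\!\left(-\frac{1}{N}\sum_{i=1}^{N}\log p\left(x_i \mid x_{<i}\right)\right)
$$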
|
|
|
| Quant Level | Perplexity Score | |
|
|-------------|------------------| |
|
| 5.0 | 4.2637 | |
|
| 4.5 | 4.2876 | |
|
| 4.0 | 4.3097 | |
|
| 3.5 | 4.4459 | |
|
| 3.0 | 4.6504 | |
|
| 2.75 | 5.1638 | |
|
| 2.5 | 5.1715 | |
|
| 2.25 | 6.0848 | |
|
|
|
## EQ Bench |
|
|
|
Here are the EQ Bench scores for the EXL2 quants using the Alpaca, ChatML, Mistral, Vicuna-v1.1, and Vicuna-v0 prompt templates. A higher score is better.
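As an illustration of what these templates look like, here is the general shape of a ChatML prompt (the other templates differ only in their wrappers; Alpaca, for example, uses `### Instruction:` / `### Response:` headers):

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```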
|
|
|
| Quant Level | Instruct Template | Score |
|
|------------|-------------------|-------| |
|
| 5.0 | ChatML | 79.91 | |
|
| 5.0 | Alpaca | 81.45 | |
|
| 5.0 | Mistral | 81.11 | |
|
| 5.0 | Vicuna-v1.1 | 78.37 | |
|
| 5.0 | Vicuna-v0 | 76.64 | |
|
| 4.5 | ChatML | 80.64 | |
|
| 4.5 | Alpaca | 80.90 |
|
| 4.5 | Mistral | 81.65 | |
|
| 4.5 | Vicuna-v1.1 | 77.04 | |
|
| 4.5 | Vicuna-v0 | 74.60 |
|
| 4.0 | ChatML | 80.78 | |
|
| 4.0 | Alpaca | 79.53 | |
|
| 4.0 | Mistral | 82.78 | |
|
| 4.0 | Vicuna-v1.1 | 79.17 | |
|
| 4.0 | Vicuna-v0 | 76.41 | |
|
| 3.5 | ChatML | 81.11 | |
|
| 3.5 | Alpaca | 82.42 | |
|
| 3.5 | Mistral | 82.34 | |
|
| 3.5 | Vicuna-v1.1 | 81.04 | |
|
| 3.5 | Vicuna-v0 | 78.09 | |
|
| 3.0 | ChatML | 79.13 | |
|
| 3.0 | Alpaca | 77.74 | |
|
| 3.0 | Mistral | 80.11 | |
|
| 3.0 | Vicuna-v1.1 | 79.38 | |
|
| 3.0 | Vicuna-v0 | 77.25 | |
|
| 2.75 | ChatML | 79.60 |
|
| 2.75 | Alpaca | 77.85 | |
|
| 2.75 | Mistral | 79.71 | |
|
| 2.75 | Vicuna-v1.1 | 76.93 | |
|
| 2.75 | Vicuna-v0 | 75.91 | |
|
| 2.5 | ChatML | 77.45 | |
|
| 2.5 | Alpaca | 77.00 |
|
| 2.5 | Mistral | 78.40 |
|
| 2.5 | Vicuna-v1.1 | 75.86 | |
|
| 2.5 | Vicuna-v0 | 75.25 | |
|
| 2.25 | ChatML | 77.18 | |
|
| 2.25 | Alpaca | 74.06 | |
|
| 2.25 | Mistral | 76.75 | |
|
| 2.25 | Vicuna-v1.1 | 75.56 | |
|
| 2.25 | Vicuna-v0 | 74.28 | |
|
|
|
|
|
|
|
### Perplexity Script |
|
|
|
This was the script used for perplexity testing. |
|
|
|
```bash
#!/bin/bash

# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2

# Set the model name and the bit sizes to test
MODEL_NAME="miqu-1-70b-sf"
BIT_PRECISIONS=(8.0 7.0 6.0 5.5 5.0 4.5 4.0 3.5 3.0 2.75 2.5 2.25)

# Print the markdown table header
echo "| Quant Level | Perplexity Score |"
echo "|-------------|------------------|"

for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
  MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
  # Only test quants that have actually been built
  if [ -d "$MODEL_DIR" ]; then
    # -gs splits the model across two GPUs (VRAM per device, in GB);
    # -ed runs the perplexity evaluation on the given dataset
    output=$(python test_inference.py -m "$MODEL_DIR" -gs 22,24 -ed data/wikitext/wikitext-2-v1.parquet)
    score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')
    echo "| $BIT_PRECISION | $score |"
  fi
done
```
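Note that the script is intended to be run from the root of an exllamav2 checkout, where `test_inference.py` lives, and it looks for quant folders using the naming convention produced by the quantization script below.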
|
|
|
|
|
## Quant Details |
|
|
|
This is the script used for quantization. |
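The expensive measurement pass runs once and is saved to a JSON file, which is then reused (via `-m`) for each target bitrate.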
|
|
|
```bash
#!/bin/bash

# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2

# Set the model name
MODEL_NAME="miqu-1-70b-sf"

# Define variables
MODEL_DIR="models/152334H_miqu-1-70b-sf"
OUTPUT_DIR="exl2_$MODEL_NAME"
MEASUREMENT_FILE="measurements/$MODEL_NAME.json"

# Create the measurement file if needed
if [ ! -f "$MEASUREMENT_FILE" ]; then
  echo "Creating $MEASUREMENT_FILE"
  # Start from a clean working directory
  if [ -d "$OUTPUT_DIR" ]; then
    rm -r "$OUTPUT_DIR"
  fi
  mkdir "$OUTPUT_DIR"

  # -nr starts a fresh job rather than resuming an interrupted one;
  # -om saves the measurement pass to a file for reuse
  python convert.py -i "$MODEL_DIR" -o "$OUTPUT_DIR" -nr -om "$MEASUREMENT_FILE"
fi

# Choose one of the below. Either create a single quant for testing or a batch of them.
# BIT_PRECISIONS=(5.0)
BIT_PRECISIONS=(8.0 7.0 6.0 5.5 5.0 4.5 4.0 3.5 3.0 2.75 2.5 2.25)

for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
  CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"

  # If it doesn't already exist, make the quant
  if [ ! -d "$CONVERTED_FOLDER" ]; then
    echo "Creating $CONVERTED_FOLDER"

    # Create directories
    if [ -d "$OUTPUT_DIR" ]; then
      rm -r "$OUTPUT_DIR"
    fi
    mkdir "$OUTPUT_DIR"
    mkdir "$CONVERTED_FOLDER"

    # -m reuses the saved measurement file; -b sets the target bits per weight;
    # -cf writes the finished quant to the converted folder
    python convert.py -i "$MODEL_DIR" -o "$OUTPUT_DIR" -nr -m "$MEASUREMENT_FILE" -b "$BIT_PRECISION" -cf "$CONVERTED_FOLDER"
  fi
done
```
|
|