---
base_model:
- unsloth/Mistral-Nemo-Instruct-2407
---
Note: This model is no longer my best W8A8 quantization. Please consider using the improved quantization I released later:

noneUsername/Mistral-Nemo-Instruct-2407-abliterated-W8A8-Dynamic-Per-Token
Original model:

`vllm (pretrained=/root/autodl-tmp/Mistral-Nemo-Instruct-2407,add_bos_token=true,tensor_parallel_size=2,max_model_len=4096,gpu_memory_utilization=0.85,swap_space=0), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto`
|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.800|± |0.0253|
| | |strict-match | 5|exact_match|↑ |0.784|± |0.0261|
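For reference, a settings line like the one above corresponds roughly to the following lm-evaluation-harness invocation. This is a sketch, not the exact command from this card: the local model path and flags are read off the settings header, and running it requires a 2-GPU machine with vLLM installed.

```shell
lm_eval --model vllm \
  --model_args pretrained=/root/autodl-tmp/Mistral-Nemo-Instruct-2407,add_bos_token=true,tensor_parallel_size=2,max_model_len=4096,gpu_memory_utilization=0.85,swap_space=0 \
  --tasks gsm8k \
  --num_fewshot 5 \
  --limit 250 \
  --batch_size auto
```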
W8A8 quantized model:

`vllm (pretrained=/root/autodl-tmp/output,add_bos_token=true,tensor_parallel_size=2,max_model_len=2048), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto`
|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.792|± |0.0257|
| | |strict-match | 5|exact_match|↑ |0.776|± |0.0264|
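The Stderr columns in both tables are consistent with the sample standard error of a binomial proportion, sqrt(p·(1−p)/(n−1)), over the 250-question limit. A small consistency check (an illustration only; lm-evaluation-harness computes its stderr internally):

```python
import math

def binom_stderr(p: float, n: int) -> float:
    """Sample standard error of a proportion over n trials."""
    return math.sqrt(p * (1.0 - p) / (n - 1))

# exact_match values from the two tables above, limit = 250 questions
for p in (0.800, 0.784, 0.792, 0.776):
    print(f"p={p:.3f}  stderr={binom_stderr(p, 250):.4f}")
# → 0.0253, 0.0261, 0.0257, 0.0264, matching the tables
```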
Along the way I found some rules of thumb for choosing quantization parameters, which led to the better results in the quantization linked above.