
Note: this model is no longer my best W8A8 quantization; please consider the better quantized model I released later: noneUsername/Mistral-Nemo-Instruct-2407-abliterated-W8A8-Dynamic-Per-Token
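For reference, here is a minimal sketch of loading this checkpoint with vLLM. The parallelism, context-length, and memory settings simply mirror the evaluation runs below and are not requirements; adjust them to your hardware.

```python
from vllm import LLM, SamplingParams

# Minimal sketch: load the W8A8 quantized checkpoint with vLLM.
# tensor_parallel_size / max_model_len / gpu_memory_utilization mirror
# the eval settings below; tune them for your own GPUs.
llm = LLM(
    model="noneUsername/Mistral-Nemo-Instruct-2407-W8A8-Dynamic-Per-Token-better",
    tensor_parallel_size=2,
    max_model_len=4096,
    gpu_memory_utilization=0.85,
)

params = SamplingParams(temperature=0.0, max_tokens=128)
out = llm.generate(["What is 17 * 12?"], params)
print(out[0].outputs[0].text)
```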

Original model:

vllm (pretrained=/root/autodl-tmp/Mistral-Nemo-Instruct-2407,add_bos_token=true,tensor_parallel_size=2,max_model_len=4096,gpu_memory_utilization=0.85,swap_space=0), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto

| Tasks | Version | Filter           | n-shot | Metric      | Value |   | Stderr |
|-------|--------:|------------------|-------:|-------------|------:|---|-------:|
| gsm8k |       3 | flexible-extract |      5 | exact_match | 0.800 | ± | 0.0253 |
| gsm8k |       3 | strict-match     |      5 | exact_match | 0.784 | ± | 0.0261 |
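The headers above are lm-evaluation-harness output. The runs can be reproduced roughly as follows; this is a sketch using `lm_eval.simple_evaluate`, reconstructed from the result header rather than from the original command:

```python
import lm_eval

# Reconstruction of the eval above; model_args are copied from the
# result header. Swap pretrained= to the quantized output path for
# the second run.
results = lm_eval.simple_evaluate(
    model="vllm",
    model_args=(
        "pretrained=/root/autodl-tmp/Mistral-Nemo-Instruct-2407,"
        "add_bos_token=true,tensor_parallel_size=2,max_model_len=4096,"
        "gpu_memory_utilization=0.85,swap_space=0"
    ),
    tasks=["gsm8k"],
    num_fewshot=5,
    limit=250,
    batch_size="auto",
)
print(results["results"]["gsm8k"])
```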

This W8A8 quantized model:

vllm (pretrained=/root/autodl-tmp/output,add_bos_token=true,tensor_parallel_size=2,max_model_len=2048), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto

| Tasks | Version | Filter           | n-shot | Metric      | Value |   | Stderr |
|-------|--------:|------------------|-------:|-------------|------:|---|-------:|
| gsm8k |       3 | flexible-extract |      5 | exact_match | 0.792 | ± | 0.0257 |
| gsm8k |       3 | strict-match     |      5 | exact_match | 0.776 | ± | 0.0264 |

While experimenting with the quantization parameters, I found some consistent rules and used them to achieve better results.
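The exact recipe is not published here. As an illustration only, a W8A8 dynamic per-token checkpoint of this kind can be produced with llm-compressor along these lines; the SmoothQuant strength, calibration dataset, sample count, and output path below are assumptions, not the settings actually used for this upload:

```python
from llmcompressor.transformers import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.modifiers.smoothquant import SmoothQuantModifier

# Illustrative W8A8 recipe: int8 weights, dynamic per-token int8 activations.
# smoothing_strength, dataset, and num_calibration_samples are assumed values.
recipe = [
    SmoothQuantModifier(smoothing_strength=0.8),
    GPTQModifier(targets="Linear", scheme="W8A8", ignore=["lm_head"]),
]

oneshot(
    model="mistralai/Mistral-Nemo-Instruct-2407",
    dataset="open_platypus",
    recipe=recipe,
    output_dir="Mistral-Nemo-Instruct-2407-W8A8",
    max_seq_length=2048,
    num_calibration_samples=512,
)
```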

