
vllm (pretrained=/root/autodl-tmp/magnum-v4-22b-W8A8,add_bos_token=true,tensor_parallel_size=2,max_model_len=2048,dtype=bfloat16), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto

| Tasks | Version | Filter           | n-shot | Metric      | Value |   | Stderr |
|-------|--------:|------------------|-------:|-------------|------:|---|-------:|
| gsm8k |       3 | flexible-extract |      5 | exact_match | 0.844 | ± |  0.023 |
|       |         | strict-match     |      5 | exact_match | 0.808 | ± |  0.025 |
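The bfloat16 run above can be reproduced through the lm-evaluation-harness Python API with its vLLM backend. The sketch below is a minimal example, assuming lm-evaluation-harness ≥ 0.4 with vLLM installed and two GPUs; it substitutes this model's Hub repo id for the local `/root/autodl-tmp` path shown in the run header.

```python
# Reproduction sketch (assumptions: lm-evaluation-harness >= 0.4 with the vLLM
# backend installed, 2 GPUs, and the Hub repo id standing in for the local path
# used in the original run).
import lm_eval

results = lm_eval.simple_evaluate(
    model="vllm",
    model_args=(
        "pretrained=noneUsername/magnum-v4-22b-W8A8-Dynamic-Per-Token,"
        "add_bos_token=true,tensor_parallel_size=2,"
        "max_model_len=2048,dtype=bfloat16"
    ),
    tasks=["gsm8k"],
    num_fewshot=5,
    limit=250,
    batch_size="auto",
)

# The gsm8k entry holds the exact_match scores reported in the table above.
print(results["results"]["gsm8k"])
```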

vllm (pretrained=/root/autodl-tmp/magnum-v4-22b-W8A8,add_bos_token=true,tensor_parallel_size=2,max_model_len=2048,dtype=float16), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto

| Tasks | Version | Filter           | n-shot | Metric      | Value |   | Stderr |
|-------|--------:|------------------|-------:|-------------|------:|---|-------:|
| gsm8k |       3 | flexible-extract |      5 | exact_match | 0.840 | ± | 0.0232 |
|       |         | strict-match     |      5 | exact_match | 0.804 | ± | 0.0252 |
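For plain generation, the W8A8 checkpoint can be loaded directly with vLLM's Python API. This is a small sketch, assuming a vLLM build with INT8 W8A8 (compressed-tensors) support and two GPUs for tensor parallelism; the prompt and sampling settings are illustrative only.

```python
# Inference sketch (assumptions: vLLM with INT8 W8A8 / compressed-tensors
# support and 2 GPUs; prompt and sampling settings are illustrative).
from vllm import LLM, SamplingParams

llm = LLM(
    model="noneUsername/magnum-v4-22b-W8A8-Dynamic-Per-Token",
    tensor_parallel_size=2,
    max_model_len=2048,
    dtype="bfloat16",
)

sampling = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=256)
outputs = llm.generate(["Write a short scene set in a rainy harbor town."], sampling)
print(outputs[0].outputs[0].text)
```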
Safetensors · Model size: 22.3B params · Tensor types: BF16, I8
