vllm (pretrained=/root/autodl-tmp/Qwen2.5-Coder-14B-Instruct-abliterated,add_bos_token=true,tensor_parallel_size=2,max_model_len=2048,gpu_memory_utilization=0.80,max_num_seqs=5), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: 5
| Tasks | Version | Filter | n-shot | Metric | | Value | | Stderr |
|---|---|---|---|---|---|---|---|---|
| gsm8k | 3 | flexible-extract | 5 | exact_match | ↑ | 0.872 | ± | 0.0212 |
| | | strict-match | 5 | exact_match | ↑ | 0.868 | ± | 0.0215 |
vllm (pretrained=/root/autodl-tmp/output91,add_bos_token=true,tensor_parallel_size=2,max_model_len=2048,gpu_memory_utilization=0.80,max_num_seqs=5), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: 5
| Tasks | Version | Filter | n-shot | Metric | | Value | | Stderr |
|---|---|---|---|---|---|---|---|---|
| gsm8k | 3 | flexible-extract | 5 | exact_match | ↑ | 0.872 | ± | 0.0212 |
| | | strict-match | 5 | exact_match | ↑ | 0.872 | ± | 0.0212 |
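The result headers above list the exact vLLM model arguments and harness settings used, so the runs can presumably be reproduced with an `lm_eval` invocation along these lines (a sketch assuming lm-evaluation-harness with the vLLM backend is installed and the model weights sit at the path shown; swap in `/root/autodl-tmp/output91` for the quantized run):

```shell
# Reproduce the gsm8k evaluation shown above (assumed invocation,
# reconstructed from the model_args / num_fewshot / limit / batch_size
# values printed in the result header).
lm_eval --model vllm \
  --model_args pretrained=/root/autodl-tmp/Qwen2.5-Coder-14B-Instruct-abliterated,add_bos_token=true,tensor_parallel_size=2,max_model_len=2048,gpu_memory_utilization=0.80,max_num_seqs=5 \
  --tasks gsm8k \
  --num_fewshot 5 \
  --limit 250 \
  --batch_size 5
```

With `limit: 250`, only the first 250 gsm8k test items are scored, which is why the two runs can report identical values; the matching flexible-extract and strict-match scores for the W8A8 model suggest negligible accuracy loss from quantization on this subset.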
Model tree for noneUsername/Qwen2.5-Coder-14B-Instruct-abliterated-W8A8-Dynamic-Per-Token
- Base model: Qwen/Qwen2.5-14B
- Finetuned from it: Qwen/Qwen2.5-Coder-14B
- Finetuned from it: Qwen/Qwen2.5-Coder-14B-Instruct