yintongl committed · Commit 49ee5dd · verified · 1 Parent(s): 92edfdd

Update README.md

Files changed (1):
  1. README.md (+2 -2)
README.md CHANGED
@@ -43,13 +43,13 @@ python3 main.py \
 
 ### Evaluate the model
 
-Install [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness.git) from source, we used the git id 96d185fa6232a5ab685ba7c43e45d1dbb3bb906d
+Install [lm-eval-harness 0.4.2](https://github.com/EleutherAI/lm-evaluation-harness.git) from source.
 
 ```bash
 lm_eval --model hf --model_args pretrained="Intel/bloom-7b1-int4-inc",autogptq=True,gptq_use_triton=True --device cuda:0 --tasks lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,mmlu --batch_size 32
 ```
 
-| Metric | FP16 | INT4 |
+| Metric | BF16 | INT4 |
 | -------------- | ------ | ------ |
 | Avg. | 0.4732 | 0.4716 |
 | mmlu | 0.2638 | 0.2598 |
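
For reference, a minimal sketch of what "install lm-eval-harness 0.4.2 from source" in the updated instructions could look like; the `v0.4.2` tag name, the editable install, and the extra `auto-gptq` dependency are assumptions, not part of the commit itself:

```bash
# Sketch: build lm-eval-harness from source at the 0.4.2 release
# (tag name v0.4.2 and editable install are assumptions).
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
cd lm-evaluation-harness
git checkout v0.4.2
pip install -e .

# The command above passes autogptq=True, so the AutoGPTQ package is
# presumably also needed (assumed install step).
pip install auto-gptq
```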