Lin-K76 committed
Commit 5e2d102
1 Parent(s): 1017899

Update README.md

Files changed (1): README.md (+51 -28)

README.md CHANGED
@@ -27,13 +27,13 @@ base_model: meta-llama/Meta-Llama-3.1-405B-Instruct
  - **Activation quantization:** FP8
  - **Intended Use Cases:** Intended for commercial and research use in multiple languages. Similarly to [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct), this model is intended for assistant-like chat.
  - **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English.
- - **Release Date:** 7/24/2024
- - **Version:** 1.0
+ - **Release Date:** 8/22/2024
+ - **Version:** 1.1
  - **License(s):** [llama3.1](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B/blob/main/LICENSE)
  - **Model Developers:** Neural Magic

- Quantized version of [Meta-Llama-3.1-405B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-Instruct).
- It achieves an average score of 96.67 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 96.93.
+ Quantized version of [Meta-Llama-3.1-405B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-Instruct) with the updated 8 kv-heads.
+ It achieves an average score of 86.86 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 86.79.

  ### Model Optimizations

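The hunk above describes an FP8 activation-quantized chat model that is meant to be served with [vLLM](https://docs.vllm.ai/en/stable/). As a rough sketch only (this snippet is not taken from the card or from this commit; the prompt and sampling settings are illustrative), loading the checkpoint for assistant-style generation with vLLM's Python API could look like:

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_id = "neuralmagic/Meta-Llama-3.1-405B-Instruct-FP8-dynamic"

# tensor_parallel_size=8 and max_model_len=4096 mirror the lm_eval commands
# later in this diff; a 405B checkpoint needs a multi-GPU node even at FP8.
llm = LLM(model=model_id, tensor_parallel_size=8, max_model_len=4096)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Build a chat-style prompt with the model's own chat template.
messages = [{"role": "user", "content": "Briefly explain FP8 dynamic quantization."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

outputs = llm.generate([prompt], SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256))
print(outputs[0].outputs[0].text)
```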
@@ -140,7 +140,7 @@ oneshot(

  The model was evaluated on MMLU, ARC-Challenge, GSM-8K, Hellaswag, Winogrande and TruthfulQA.
  Evaluation was conducted using the Neural Magic fork of [lm-evaluation-harness](https://github.com/neuralmagic/lm-evaluation-harness/tree/llama_3.1_instruct) (branch llama_3.1_instruct) and the [vLLM](https://docs.vllm.ai/en/stable/) engine.
- This version of the lm-evaluation-harness includes versions of ARC-Challenge and GSM-8K that match the prompting style of [Meta-Llama-3.1-Instruct-evals](https://huggingface.co/datasets/meta-llama/Meta-Llama-3.1-8B-Instruct-evals).
+ This version of the lm-evaluation-harness includes versions of ARC-Challenge, GSM-8K, MMLU, and MMLU-cot that match the prompting style of [Meta-Llama-3.1-Instruct-evals](https://huggingface.co/datasets/meta-llama/Meta-Llama-3.1-8B-Instruct-evals).

  ### Accuracy

@@ -159,41 +159,51 @@ This version of the lm-evaluation-harness includes versions of ARC-Challenge and
  <tr>
  <td>MMLU (5-shot)
  </td>
- <td>86.25
+ <td>87.41
  </td>
- <td>86.17
+ <td>87.46
  </td>
- <td>99.91%
+ <td>100.0%
  </td>
  </tr>
  <tr>
- <td>ARC Challenge (0-shot)
+ <td>MMLU-cot (0-shot)
  </td>
- <td>96.93
+ <td>88.11
  </td>
- <td>96.67
+ <td>88.11
  </td>
- <td>99.73%
+ <td>100.0%
  </td>
  </tr>
  <tr>
- <td>GSM-8K (CoT, 8-shot, strict-match)
+ <td>ARC Challenge (0-shot)
+ </td>
+ <td>94.97
  </td>
- <td>96.44
+ <td>94.97
+ </td>
+ <td>100.0%
+ </td>
+ </tr>
+ <tr>
+ <td>GSM-8K-cot (8-shot, strict-match)
  </td>
  <td>95.98
  </td>
- <td>99.52%
+ <td>95.75
+ </td>
+ <td>99.76%
  </td>
  </tr>
  <tr>
  <td>Hellaswag (10-shot)
  </td>
- <td>88.33
+ <td>88.54
  </td>
- <td>88.34
+ <td>88.45
  </td>
- <td>100.0%
+ <td>99.90%
  </td>
  </tr>
  <tr>
@@ -201,29 +211,29 @@ This version of the lm-evaluation-harness includes versions of ARC-Challenge and
  </td>
  <td>87.21
  </td>
- <td>87.45
+ <td>88.00
  </td>
- <td>100.2%
+ <td>100.9%
  </td>
  </tr>
  <tr>
  <td>TruthfulQA (0-shot, mc2)
  </td>
- <td>64.64
+ <td>65.31
  </td>
- <td>64.71
+ <td>65.25
  </td>
- <td>100.1%
+ <td>99.91%
  </td>
  </tr>
  <tr>
  <td><strong>Average</strong>
  </td>
- <td><strong>86.63</strong>
+ <td><strong>86.79</strong>
  </td>
- <td><strong>86.55</strong>
+ <td><strong>86.60</strong>
  </td>
- <td><strong>99.91%</strong>
+ <td><strong>99.74%</strong>
  </td>
  </tr>
  </table>
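The Recovery column in the table above is simply the FP8-dynamic score expressed as a percentage of the unquantized score. A minimal sketch of that arithmetic, checked against the GSM-8K-cot row of the updated table:

```python
def recovery(baseline: float, quantized: float) -> float:
    """Quantized score as a percentage of the unquantized baseline."""
    return 100.0 * quantized / baseline

# GSM-8K-cot row from the updated table: 95.98 (unquantized) vs 95.75 (FP8-dynamic).
print(f"{recovery(95.98, 95.75):.2f}%")  # 99.76%, matching the Recovery column
```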
@@ -237,12 +247,25 @@ The results were obtained using the following commands:
  ```
  lm_eval \
  --model vllm \
- --model_args pretrained="neuralmagic/Meta-Llama-3.1-405B-Instruct-FP8-dynamic",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=8 \
- --tasks mmlu \
+ --model_args pretrained="neuralmagic/Meta-Llama-3.1-405B-Instruct-FP8-dynamic",dtype=auto,add_bos_token=True,max_model_len=4096,max_gen_toks=10,tensor_parallel_size=8 \
+ --tasks mmlu_llama_3.1_instruct \
+ --apply_chat_template \
+ --fewshot_as_multiturn \
  --num_fewshot 5 \
  --batch_size auto
  ```

+ #### MMLU-cot
+ ```
+ lm_eval \
+ --model vllm \
+ --model_args pretrained="neuralmagic/Meta-Llama-3.1-405B-Instruct-FP8-dynamic",dtype=auto,add_bos_token=True,max_model_len=4096,max_gen_toks=1024,tensor_parallel_size=8 \
+ --tasks mmlu_cot_0shot_llama_3.1_instruct \
+ --apply_chat_template \
+ --num_fewshot 0 \
+ --batch_size auto
+ ```
+
  #### ARC-Challenge
  ```
  lm_eval \
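The card's Creation section is not part of this commit beyond the `oneshot(` fragment visible in the second hunk's context. As a rough illustration only (not the card's own creation script; the recipe, paths, and loading arguments below are assumptions), an FP8-dynamic checkpoint like this is typically produced with [llm-compressor](https://github.com/vllm-project/llm-compressor) roughly as follows:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor.transformers import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

model_id = "meta-llama/Meta-Llama-3.1-405B-Instruct"
save_dir = "Meta-Llama-3.1-405B-Instruct-FP8-dynamic"  # illustrative output path

model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# FP8 weights with dynamic per-token FP8 activation scales; the lm_head is
# left unquantized. The dynamic scheme needs no calibration data.
recipe = QuantizationModifier(targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"])

oneshot(model=model, recipe=recipe)
model.save_pretrained(save_dir)
tokenizer.save_pretrained(save_dir)
```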