---
license: llama3
datasets:
- BAAI/Infinity-Instruct
base_model:
- meta-llama/Meta-Llama-3.1-8B-Instruct
---

We prune Llama-3.1-8B-Instruct down to 1.4B parameters and fine-tune it with the LLM-Neo method, which combines LoRA and knowledge distillation (KD) in one. The training data consists of 1 million lines sampled from BAAI/Infinity-Instruct.
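
Since the exact LLM-Neo recipe is not spelled out here, the following is only a conceptual sketch of one LoRA-plus-KD training step; the student checkpoint path, LoRA hyperparameters, temperature `T`, and loss weight `alpha` are all illustrative assumptions, not the published recipe.

```python
# Conceptual sketch of one LLM-Neo-style training step: LoRA adapters on a
# pruned student, updated under a combined cross-entropy + KD loss.
# The student path, LoRA settings, T, and alpha below are assumptions.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

teacher = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B-Instruct"
).eval()
student = AutoModelForCausalLM.from_pretrained("path/to/pruned-1.4B-student")  # hypothetical path
student = get_peft_model(
    student,
    LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32,
               target_modules=["q_proj", "v_proj"]),  # assumed targets
)

def neo_loss(batch, T=2.0, alpha=0.5):
    """batch holds input_ids, attention_mask, labels (both models share the tokenizer)."""
    with torch.no_grad():
        teacher_logits = teacher(**batch).logits
    out = student(**batch)
    ce = out.loss  # ordinary next-token cross-entropy against the labels
    kd = F.kl_div(
        F.log_softmax(out.logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1.0 - alpha) * kd
```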

## Benchmarks

In this section, we report results for Llama3.1-Neo-1B-100w on standard automatic benchmarks. All evaluations were run with our internal evaluation library.
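
The internal library is not public; as a rough way to obtain comparable zero-shot numbers, one could use the open-source lm-evaluation-harness. The task names below are its public identifiers, and the repository id is an assumption:

```python
# Reproduction sketch with the public lm-evaluation-harness (pip install lm-eval).
# The reported numbers come from an internal library, so scores may differ slightly.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=yang31210999/Llama3.1-Neo-1B-100w",  # assumed repo id
    tasks=["arc_challenge", "arc_easy", "ceval-valid", "mmlu", "piqa", "winogrande"],
    num_fewshot=0,
)
print(results["results"])
```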

### Evaluation results

<table>
  <tr>
    <td><strong>Category</strong></td>
    <td><strong>Benchmark</strong></td>
    <td><strong>Version</strong></td>
    <td><strong>n-shot</strong></td>
    <td><strong>Metric</strong></td>
    <td><strong>Value</strong></td>
    <td><strong>Stderr</strong></td>
  </tr>
  <tr>
    <td rowspan="2">ARC</td>
    <td>ARC-Challenge</td>
    <td>1</td>
    <td>0</td>
    <td>acc</td>
    <td>0.1920</td>
    <td>± 0.0115</td>
  </tr>
  <tr>
    <td>ARC-Easy</td>
    <td>1</td>
    <td>0</td>
    <td>acc</td>
    <td>0.3834</td>
    <td>± 0.0100</td>
  </tr>
  <tr>
    <td rowspan="3">CEVAL</td>
    <td>CEVAL (valid)</td>
    <td>N/A</td>
    <td>0</td>
    <td>acc</td>
    <td>0.2370</td>
    <td>± 0.0117</td>
  </tr>
  <tr>
    <td>CEVAL (Accountant)</td>
    <td>1</td>
    <td>0</td>
    <td>acc</td>
    <td>0.2449</td>
    <td>± 0.0621</td>
  </tr>
  <tr>
    <td>CEVAL (Advanced Mathematics)</td>
    <td>1</td>
    <td>0</td>
    <td>acc</td>
    <td>0.3158</td>
    <td>± 0.1096</td>
  </tr>
  <tr>
    <td rowspan="2">MMLU</td>
    <td>MMLU</td>
    <td>N/A</td>
    <td>0</td>
    <td>acc</td>
    <td>0.2439</td>
    <td>± 0.0036</td>
  </tr>
  <tr>
    <td>MMLU (Abstract Algebra)</td>
    <td>0</td>
    <td>0</td>
    <td>acc</td>
    <td>0.2500</td>
    <td>± 0.0435</td>
  </tr>
  <tr>
    <td rowspan="2">PIQA</td>
    <td>PIQA</td>
    <td>1</td>
    <td>0</td>
    <td>acc</td>
    <td>0.5843</td>
    <td>± 0.0115</td>
  </tr>
  <tr>
    <td>PIQA (Normalized)</td>
    <td>1</td>
    <td>0</td>
    <td>acc_norm</td>
    <td>0.5822</td>
    <td>± 0.0115</td>
  </tr>
  <tr>
    <td>Winogrande</td>
    <td>Winogrande</td>
    <td>1</td>
    <td>0</td>
    <td>acc</td>
    <td>0.5249</td>
    <td>± 0.0140</td>
  </tr>
</table>