markoarnauto committed • Commit e5ac497 • 1 Parent(s): fe76cda

Upload README.md with huggingface_hub
README.md CHANGED
````diff
@@ -25,42 +25,42 @@ curl http://localhost:8000/v1/completions -H "Content-Type: application/json
 ```
 
 ## Evaluations
 
-| __English__ | __Llama-3 70B Instruct__ | __Llama 3 70B GPTQ__ | __Mixtral Instruct__ |
-| Avg. | 76.19 | 75.14
-| ARC | 71.6 | 70.7
-| Hellaswag | 77.3 | 76.4
-| MMLU | 79.66 | 78.33
-| | |
-| __French__ | __Llama-3 70B Instruct__ | __Llama 3 70B GPTQ__ | __Mixtral Instruct__ |
-| Avg. | 70.97 | 70.27
-| ARC_fr | 65.0 | 64.7
-| Hellaswag_fr | 72.4 | 71.4
-| MMLU_fr | 75.5 | 74.7
-| | |
-| __German__ | __Llama-3 70B Instruct__ | __Llama 3 70B GPTQ__ | __Mixtral Instruct__ |
-| Avg. | 68.43 | 66.93
-| ARC_de | 64.2 | 62.6
-| Hellaswag_de | 67.8 | 66.7
-| MMLU_de | 73.3 | 71.5
-| | |
-| __Italian__ | __Llama-3 70B Instruct__ | __Llama 3 70B GPTQ__ | __Mixtral Instruct__ |
-| Avg. | 70.17 | 68.63
-| ARC_it | 64.0 | 62.1
-| Hellaswag_it | 72.6 | 71.0
-| MMLU_it | 73.9 | 72.8
-| | |
-| __Safety__ | __Llama-3 70B Instruct__ | __Llama 3 70B GPTQ__ | __Mixtral Instruct__ |
-| Avg. | 64.28 | 63.64
-| RealToxicityPrompts | 97.9 | 98.1
-| TruthfulQA | 61.91 | 59.91
-| CrowS | 33.04 | 32.92
-| | |
-| __Spanish__ | __Llama-3 70B Instruct__ | __Llama 3 70B GPTQ__ | __Mixtral Instruct__ |
-| Avg. | 72.5 |
-| ARC_es | 66.7 |
-| Hellaswag_es | 75.8 |
-| MMLU_es | 75 |
+| __English__ | __Llama-3 70B Instruct__ | __Llama 3 70B Instruct GPTQ__ | __Mixtral Instruct__ |
+|:--------------|:---------------------------|:--------------------------------|:-----------------------|
+| Avg. | 76.19 | 75.14 | 73.17 |
+| ARC | 71.6 | 70.7 | 71.0 |
+| Hellaswag | 77.3 | 76.4 | 77.0 |
+| MMLU | 79.66 | 78.33 | 71.52 |
+| | | | |
+| __French__ | __Llama-3 70B Instruct__ | __Llama 3 70B Instruct GPTQ__ | __Mixtral Instruct__ |
+| Avg. | 70.97 | 70.27 | 68.7 |
+| ARC_fr | 65.0 | 64.7 | 63.9 |
+| Hellaswag_fr | 72.4 | 71.4 | 77.1 |
+| MMLU_fr | 75.5 | 74.7 | 65.1 |
+| | | | |
+| __German__ | __Llama-3 70B Instruct__ | __Llama 3 70B Instruct GPTQ__ | __Mixtral Instruct__ |
+| Avg. | 68.43 | 66.93 | 66.47 |
+| ARC_de | 64.2 | 62.6 | 62.8 |
+| Hellaswag_de | 67.8 | 66.7 | 72.1 |
+| MMLU_de | 73.3 | 71.5 | 64.5 |
+| | | | |
+| __Italian__ | __Llama-3 70B Instruct__ | __Llama 3 70B Instruct GPTQ__ | __Mixtral Instruct__ |
+| Avg. | 70.17 | 68.63 | 67.17 |
+| ARC_it | 64.0 | 62.1 | 63.8 |
+| Hellaswag_it | 72.6 | 71.0 | 75.6 |
+| MMLU_it | 73.9 | 72.8 | 62.1 |
+| | | | |
+| __Safety__ | __Llama-3 70B Instruct__ | __Llama 3 70B Instruct GPTQ__ | __Mixtral Instruct__ |
+| Avg. | 64.28 | 63.64 | 63.56 |
+| RealToxicityPrompts | 97.9 | 98.1 | 93.2 |
+| TruthfulQA | 61.91 | 59.91 | 64.61 |
+| CrowS | 33.04 | 32.92 | 32.86 |
+| | | | |
+| __Spanish__ | __Llama-3 70B Instruct__ | __Llama 3 70B Instruct GPTQ__ | __Mixtral Instruct__ |
+| Avg. | 72.5 | 71.3 | 68.8 |
+| ARC_es | 66.7 | 65.7 | 64.4 |
+| Hellaswag_es | 75.8 | 74 | 77.5 |
+| MMLU_es | 75 | 74.2 | 64.6 |
 
 Take with caution. We did not check for data contamination.
 Evaluation was done using [Eval. Harness](https://github.com/EleutherAI/lm-evaluation-harness) using `limit=1000` for big datasets.
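For context on the `limit=1000` note above: in lm-evaluation-harness, `--limit` caps the number of examples evaluated per task. A minimal sketch of the kind of invocation this implies, using the harness's `lm_eval` CLI (v0.4+); the model path placeholder and the task list are illustrative assumptions, not taken from this commit:

```bash
# Sketch of an Eval. Harness run with a per-task example cap.
# <model-repo-or-path> and the task list are placeholders, not from this commit.
lm_eval --model hf \
    --model_args pretrained=<model-repo-or-path> \
    --tasks arc_challenge,hellaswag,mmlu \
    --limit 1000 \
    --batch_size auto
```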