Update README.md
README.md CHANGED
```diff
@@ -9,7 +9,7 @@ while been comparable to it across different benchmarks. You can use it as a dro
 
 mera-mix-4x7B achieves 76.37 on the openLLM eval v/s 72.7 by Mixtral-8x7B (as shown [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mistralai__Mixtral-8x7B-Instruct-v0.1)).
 
-
+## OpenLLM Eval
 
 | Model | ARC |HellaSwag|MMLU |TruthfulQA|Winogrande|GSM8K|Average|
 |-------------------------------------------------------------|----:|--------:|----:|---------:|---------:|----:|------:|
```