Update README.md
README.md
@@ -112,14 +112,6 @@ model-index:
 
 See [Report for miscii-1020](https://api.wandb.ai/links/flandrelabs-carnegie-mellon-university/p35vchzx) for more details.
 
-
-| Benchmark | Metric | miscii-14b-1028 |
-|-----------|--------|----------------:|
-| MMLU-PRO | 5-shot | 61.43|
-
-$$\large{\text{There's nothing more to Show}}$$
-
-
 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
 
 Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_sthenno-com__miscii-14b-1028)
@@ -133,3 +125,4 @@ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-le
 |MuSR (0-shot) |12.00|
 |MMLU-PRO (5-shot) |46.14|
 
+$$\large{\text{There's nothing more to Show}}$$