Update README.md
README.md CHANGED

@@ -18,7 +18,7 @@ Falcon-RW-1B-Instruct-OpenOrca is a 1B parameter, causal decoder-only model base
 
 **Evaluation Results**
 
-Falcon-RW-1B-Instruct-OpenOrca is the #1 ranking model on [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) in ~1.5B parameters category!
+Falcon-RW-1B-Instruct-OpenOrca was the #1 ranking model (unfortunately not anymore) on [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) in ~1.5B parameters category! A detailed result can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ericzzz__falcon-rw-1b-instruct-openorca).
 
 | Metric     | falcon-rw-1b-instruct-openorca | falcon-rw-1b |
 |------------|-------------------------------:|-------------:|
@@ -27,9 +27,8 @@ Falcon-RW-1B-Instruct-OpenOrca is the #1 ranking model on [Open LLM Leaderboard]
 | MMLU       | 28.77 | 25.28 |
 | TruthfulQA | 37.42 | 35.96 |
 | Winogrande | 60.69 | 62.04 |
-| GSM8K |
-|
-| **Average**| **35.08** | **32.44** |
+| GSM8K      | 3.41 | 0.53 |
+| **Average**| **37.63** | **37.07** |
 
 **Motivations**
 1. To create a smaller, open-source, instruction-finetuned, ready-to-use model accessible for users with limited computational resources (lower-end consumer GPUs).