This model uses Llama 2 7B as its backbone. It was built by fine-tuning on several Orca-style datasets and merging the resulting models.
Three fine-tuned models were combined, and the one with the best ARC and MMLU performance was given the highest weight in the merge.
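A weighted merge of this kind can be sketched as a per-parameter weighted average over the checkpoints' state dicts. This is a minimal illustration, not the card's actual merge script; the weights and the tiny toy "state dicts" below are hypothetical.

```python
import numpy as np

def merge_state_dicts(state_dicts, weights):
    """Weighted average of parameter tensors, key by key.

    state_dicts: list of {name: ndarray} with identical keys and shapes.
    weights: one weight per model; normalized here so they sum to 1.
    """
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(wi * sd[name] for wi, sd in zip(w, state_dicts))
    return merged

# Toy example: three "models"; the first (best ARC/MMLU) gets the
# highest weight. Real Llama 2 7B state dicts would merge the same way.
sds = [
    {"layer.weight": np.full((2, 2), 1.0)},
    {"layer.weight": np.full((2, 2), 2.0)},
    {"layer.weight": np.full((2, 2), 4.0)},
]
merged = merge_state_dicts(sds, weights=[0.5, 0.3, 0.2])
# Every entry becomes 0.5*1 + 0.3*2 + 0.2*4 = 1.9
```

In practice the merged weights are then saved back into a single checkpoint with the original architecture and tokenizer.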
- First: Llama 2 7B fine-tuned on beaugogh/openorca-multiplechoice-10k, using the NEFTune method.
- Second: Llama 2 7B fine-tuned on the SlimOrca dataset.
- Third: Llama 2 7B fine-tuned on beaugogh/openorca-multiplechoice-10k.
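NEFTune regularizes fine-tuning by adding uniform noise to the token embeddings during training, scaled by alpha / sqrt(seq_len * hidden_dim). The sketch below shows only that noise step; the alpha value is a hyperparameter and the one used for this model is not stated on the card.

```python
import numpy as np

def neftune_noise(embeddings, alpha=5.0, rng=None):
    """Add NEFTune-style uniform noise to a batch of token embeddings.

    embeddings: (seq_len, hidden_dim) array of embedding outputs.
    alpha: NEFTune noise scale (hypothetical default; tune per run).
    Applied only during training; inference uses clean embeddings.
    """
    rng = rng or np.random.default_rng(0)
    seq_len, dim = embeddings.shape
    scale = alpha / np.sqrt(seq_len * dim)
    noise = rng.uniform(-1.0, 1.0, size=embeddings.shape) * scale
    return embeddings + noise

# Example: noise magnitude is bounded by alpha / sqrt(L * d).
emb = np.zeros((8, 16))
noisy = neftune_noise(emb, alpha=5.0)
```

In a real training loop this would be applied to the output of the model's embedding layer at each forward pass, leaving the rest of the fine-tuning setup unchanged.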
We'll add the benchmark results once the official results are available.