For most of the tasks we used Accuracy, as they are framed as Multiple Choice questions.

## **Results**

The model was evaluated using the LM Evaluation Harness library from EleutherAI.

To reproduce our results, please follow the instructions in Latxa's [GitHub repository](https://github.com/hitz-zentroa/latxa?tab=readme-ov-file#evaluation).

| Model | Size | XStory | Belebele | BasGLUE | EusProf | EusRead | EusTrivia | EusExams | Avg |
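As a rough sketch of what an evaluation run with the LM Evaluation Harness looks like: the model id and task names below are illustrative assumptions, not the project's authoritative commands, so consult the repository linked above for the exact invocations used to produce the table.

```shell
# Install EleutherAI's LM Evaluation Harness
pip install lm-eval

# Hypothetical run: model id and task names are assumptions for illustration
lm_eval --model hf \
    --model_args pretrained=HiTZ/latxa-7b-v1 \
    --tasks xstorycloze_eu \
    --batch_size 8 \
    --output_path results/
```

This is a config-style CLI fragment: it needs a GPU and network access, and the benchmark-specific task names (e.g. for Belebele or EusTrivia) must be taken from the repository's evaluation instructions rather than guessed.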