Datasets: ai-forever

ai-forever committed
Commit cb3caa4 • 1 Parent(s): a881df0
Update README.md
README.md CHANGED

@@ -644,11 +644,9 @@ The datasets are divided into subsets based on context lengths: 4k, 8k, 16k, 32k
 
 ## Metrics
 
-
+We use **Exact Match (EM)** and **F1** metrics in our tests. The **EM** metric is used to evaluate the accuracy of the model's responses by comparing the predicted answers to the ground truth. It is particularly effective for tasks where precise matching of responses is critical, such as question answering and retrieval tasks.
 
-
-
-- **Overall Score**: For each model, the overall score is calculated by averaging the scores obtained across various tasks and context lengths. This provides a holistic view of the model's performance and its ability to generalize across different types of long-context tasks.
+<img src="table2.png" width="800" />
 
 ## Citation
 
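The added paragraph describes how answers are scored. As a rough illustration only (not the benchmark's own code), the sketch below shows one common way to compute Exact Match and token-level F1 against a ground-truth answer; the `normalize` helper and the simple whitespace tokenization are assumptions.

```python
# Illustrative sketch only: a common way to compute EM and token-level F1.
# The normalization and tokenization choices here are assumptions, not the
# benchmark's official implementation.
from collections import Counter


def normalize(text: str) -> str:
    # Lowercase and collapse whitespace; real pipelines often also strip punctuation.
    return " ".join(text.lower().split())


def exact_match(prediction: str, ground_truth: str) -> float:
    # 1.0 if the normalized prediction equals the normalized ground truth, else 0.0.
    return float(normalize(prediction) == normalize(ground_truth))


def f1_score(prediction: str, ground_truth: str) -> float:
    # Token-level F1: harmonic mean of precision and recall over overlapping tokens.
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(ground_truth).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)


print(exact_match("Paris", "paris"))                # 1.0
print(f1_score("the city of Paris", "Paris city"))  # ≈ 0.67
```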
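For reference, the removed bullet defined the overall score as an average over tasks and context lengths. A minimal sketch of that aggregation follows; the task names, context lengths, and score values are hypothetical placeholders, not benchmark results.

```python
# Illustrative aggregation only: task names, context lengths, and score
# values below are hypothetical placeholders, not benchmark results.
from statistics import mean

# scores[task][context_length] = metric value for that task/length pair
scores = {
    "qa":        {"4k": 0.81, "8k": 0.77, "16k": 0.70, "32k": 0.64},
    "retrieval": {"4k": 0.90, "8k": 0.85, "16k": 0.79, "32k": 0.71},
}

# Overall score = plain average over every task/context-length cell.
overall = mean(score for per_length in scores.values() for score in per_length.values())
print(f"Overall score: {overall:.3f}")
```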