ai-forever committed
Commit cb3caa4
Parent: a881df0

Update README.md

Files changed (1): README.md (+2 -4)
README.md CHANGED
@@ -644,11 +644,9 @@ The datasets are divided into subsets based on context lengths: 4k, 8k, 16k, 32k
 
 ## Metrics
 
-The benchmark employs a range of evaluation metrics to ensure comprehensive assessment across different tasks and context lengths. These metrics are designed to measure the models' capabilities in handling long-context understanding, reasoning, and extraction of relevant information from extensive texts. Here are the primary metrics used in this benchmark:
-
-- **Exact Match (EM)**: This metric is used to evaluate the accuracy of the model's responses by comparing the predicted answers to the ground truth. It is particularly effective for tasks where precise matching of responses is critical, such as question answering and retrieval tasks.
-
-- **Overall Score**: For each model, the overall score is calculated by averaging the scores obtained across various tasks and context lengths. This provides a holistic view of the model's performance and its ability to generalize across different types of long-context tasks.
+We use **Exact Match (EM)** and **F1** metrics in our tests. The **EM** metric is used to evaluate the accuracy of the model's responses by comparing the predicted answers to the ground truth. It is particularly effective for tasks where precise matching of responses is critical, such as question answering and retrieval tasks.
+
+<img src="table2.png" width="800" />
 
 ## Citation
 
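
Both metrics follow the standard extractive-QA formulation: EM checks whether the normalized prediction matches the reference exactly, while F1 measures token-level overlap between prediction and reference. The sketch below is illustrative only; the function names and the whitespace/lowercase normalization are assumptions for this example, not the benchmark's actual scoring code.

```python
# Illustrative sketch (not the benchmark's scoring code): exact match and
# token-level F1 between a predicted answer and a ground-truth answer.
from collections import Counter


def exact_match(prediction: str, ground_truth: str) -> float:
    """Return 1.0 if the normalized prediction equals the normalized reference."""
    return float(prediction.strip().lower() == ground_truth.strip().lower())


def f1_score(prediction: str, ground_truth: str) -> float:
    """Token-level F1: harmonic mean of precision and recall over shared tokens."""
    pred_tokens = prediction.strip().lower().split()
    gold_tokens = ground_truth.strip().lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)


# Example: a partially overlapping answer scores EM = 0.0 but a non-zero F1.
print(exact_match("the red house", "the red house"))  # 1.0
print(f1_score("a red house", "the red house"))       # ~0.67
```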