LIBRA (Long Input Benchmark for Russian Analysis) is designed to evaluate the capabilities of large language models (LLMs) in understanding and processing long texts in Russian. This benchmark includes 21 datasets adapted for different tasks and complexities. The tasks are divided into four complexity groups and allow evaluation across various context lengths ranging from 4,000 up to 128,000 tokens.

## Dataset Structure

The datasets are divided into subsets based on context lengths: 4k, 8k, 16k, 32k, 64k, and 128k tokens. Each subset contains a different number of samples depending on the task complexity.
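As a rough illustration of how these subsets partition inputs, here is a small hypothetical helper (not part of LIBRA) that maps a document's token count to the smallest subset that can hold it:

```python
from typing import Optional

# Hypothetical helper (not part of LIBRA): context-length limits of each subset.
SUBSET_TOKENS = {
    "4k": 4_000, "8k": 8_000, "16k": 16_000,
    "32k": 32_000, "64k": 64_000, "128k": 128_000,
}

def pick_subset(n_tokens: int) -> Optional[str]:
    """Return the smallest subset whose context length fits `n_tokens`,
    or None if the document exceeds 128k tokens."""
    for name, limit in SUBSET_TOKENS.items():  # dicts preserve insertion order
        if n_tokens <= limit:
            return name
    return None
```

For example, a 5,000-token document falls into the 8k subset, while anything over 128,000 tokens does not fit any subset.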

## Tasks and Complexity Groups

### Group I: Simple Information Retrieval

### Group II: Question Answering and Multiple Choice
- **MatreshkaNames**: Identify the person in dialogues based on the discussed topic.
- **MatreshkaYesNo**: Indicate whether a specific topic was mentioned in the dialog.
- **LibrusecHistory**: Answer questions based on historical texts.
- **ruTREC**: Few-shot in-context learning for topic classification. Created by translating the TREC dataset from [LongBench](https://github.com/THUDM/LongBench).
- **ruSciFi**: Answer true/false based on context and general world knowledge. Translation of the SciFi dataset from [L-Eval](https://github.com/OpenLMLab/LEval).
- **ruSciAbstractRetrieval**: Retrieve relevant paragraphs from scientific abstracts.
- **ruTPO**: Multiple-choice questions similar to TOEFL exams. Translation of the TPO dataset from [L-Eval](https://github.com/OpenLMLab/LEval).
- **ruQuALITY**: Multiple-choice QA tasks based on detailed texts. Created by translating the QuALITY dataset from [L-Eval](https://github.com/OpenLMLab/LEval).
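To make the retrieval-style tasks concrete, here is a naive word-overlap baseline for a task like ruSciAbstractRetrieval. This is only an illustration of the task format, not LIBRA's evaluation code; in the benchmark the LLM under test performs the retrieval.

```python
# Naive baseline sketch for a retrieval task: score each candidate paragraph
# by word overlap with the query and return the best-scoring one.

def retrieve_paragraph(query: str, paragraphs: list) -> str:
    query_words = set(query.lower().split())
    # Pick the paragraph sharing the most words with the query.
    return max(paragraphs, key=lambda p: len(query_words & set(p.lower().split())))
```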

### Group III: Multi-hop Question Answering
- **ruBABILongQA**: 5 long-context reasoning tasks for QA using facts hidden among irrelevant information.
- **LongContextMultiQ**: Multi-hop QA based on Wikidata and Wikipedia.
- **LibrusecMHQA**: Multi-hop QA requiring information distributed across several text parts.
- **ru2WikiMultihopQA**: Translation of the 2WikiMultihopQA dataset from [LongBench](https://github.com/THUDM/LongBench).

### Group IV: Complex Reasoning and Mathematical Problems
- **ruSciPassageCount**: Count unique paragraphs in a long text.
- **ruQasper**: Question Answering over academic research papers. Created by translating the Qasper dataset from [LongBench](https://github.com/THUDM/LongBench).
- **ruGSM100**: Solve math problems using Chain-of-Thought reasoning.
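As a minimal sketch of what a counting task like ruSciPassageCount asks for, the following assumes paragraphs are separated by blank lines; LIBRA's actual delimiter and normalization rules may differ.

```python
# Illustrative sketch only: count unique non-empty paragraphs in a long text,
# assuming paragraphs are separated by blank lines.

def count_unique_passages(text: str) -> int:
    passages = [p.strip() for p in text.split("\n\n")]
    return len({p for p in passages if p})
```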

## Usage

The LIBRA benchmark is available under the MIT license.

_TODO_
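Until the usage instructions are filled in, here is a minimal sketch of how the dataset could be loaded with the Hugging Face `datasets` library. The config and split naming below is an assumption; consult the dataset card for the exact scheme.

```python
# Hypothetical usage sketch: load one LIBRA task at a given context length.
# The config/split naming is an assumption, not confirmed by this card.

SUBSETS = ["4k", "8k", "16k", "32k", "64k", "128k"]

def load_libra_task(task: str, subset: str):
    """Download one LIBRA task subset (requires network access)."""
    if subset not in SUBSETS:
        raise ValueError(f"unknown subset {subset!r}, expected one of {SUBSETS}")
    from datasets import load_dataset  # pip install datasets
    return load_dataset("ai-forever/LIBRA", task, split=subset)
```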

```
@article{LIBRA2024,
    title={Long Input Benchmark for Russian Analysis},
    author={Anonymous},
    journal={ACL},
    year={2024}
}
```

## License

The datasets are published under the MIT license.

## Acknowledgments

For more details and code, please visit our [GitHub repository](https://github.com/ai-forever/LIBRA/).