Update README.md
README.md CHANGED
@@ -47,12 +47,40 @@ dataset_info:
 download_size: 121089
 dataset_size: 186908
 configs:
-- config_name: corpus
-  data_files:
-  - split: train
-    path: corpus/train-*
 - config_name: qa
   data_files:
   - split: train
     path: qa/train-*
+- config_name: corpus
+  data_files:
+  - split: train
+    path: corpus/train-*
+
 ---
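
The rest of the hunk adds the dataset card body that follows. For reference, each `config_name` declared above is a separately loadable subset; here is a minimal loading sketch with the Hugging Face `datasets` library (the repo id is a placeholder, not this dataset's actual id):

```python
from datasets import load_dataset

# Placeholder repo id; replace with this dataset's actual id on the Hub.
REPO_ID = "your-namespace/autorag-example-dataset"

# Each `config_name` in the YAML above maps to its own parquet shards.
corpus = load_dataset(REPO_ID, "corpus", split="train")  # chunked passages
qa = load_dataset(REPO_ID, "qa", split="train")          # generated QA pairs

print(corpus)
print(qa)
```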

## AutoRAG evaluation dataset

### Made from 2024 LLM research articles (papers)

This dataset is an example for [AutoRAG](https://github.com/Marker-Inc-Korea/AutoRAG).
You can use it directly to optimize and benchmark your RAG setup in AutoRAG.
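
As a hedged sketch of what using it with AutoRAG can look like: export both configs to local parquet files, which is the form AutoRAG's evaluator consumes as far as its README describes. The commented `Evaluator` calls are recalled from that README and should be checked against the current AutoRAG documentation before use.

```python
from datasets import load_dataset

REPO_ID = "your-namespace/autorag-example-dataset"  # placeholder id

# Export both configs to local parquet files.
load_dataset(REPO_ID, "qa", split="train").to_parquet("qa.parquet")
load_dataset(REPO_ID, "corpus", split="train").to_parquet("corpus.parquet")

# Evaluation with AutoRAG (verify this API against the AutoRAG docs;
# it is recalled from the project's README and may have changed):
# from autorag.evaluator import Evaluator
# evaluator = Evaluator(qa_data_path="qa.parquet", corpus_data_path="corpus.parquet")
# evaluator.start_trial("rag_config.yaml")
```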

### How was this dataset created?

This dataset was generated 100% synthetically with GPT-4 and `Marker Inc.` technology.

First, we collected 110 recent LLM papers from arXiv.
We used the `Marker` OCR model to extract their text,
then chunked it with the MarkdownSplitter and TokenSplitter from LangChain.
To improve quality, we removed the `References` section from each article.
We then randomly selected 520 passages from the chunked corpus for question generation.
Finally, our custom pipeline generated varied and unique questions with GPT-4; a rough, illustrative sketch of these steps is given below.
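
The generation pipeline itself is Marker Inc.'s and is not published here. Purely as an illustration, here is a rough sketch of the chunking and question-generation steps; the exact splitter classes (`MarkdownTextSplitter`, `TokenTextSplitter`), chunk sizes, the `References`-stripping heuristic, and the GPT-4 prompt are all assumptions, not the pipeline actually used.

```python
import random
import re

# In newer LangChain versions these live in the `langchain_text_splitters` package.
from langchain.text_splitter import MarkdownTextSplitter, TokenTextSplitter
from openai import OpenAI


def chunk_paper(markdown_text: str) -> list[str]:
    """Chunk one Marker-extracted paper: drop References, split on markdown
    structure, then re-split into token-bounded passages."""
    # Crude heuristic: cut everything from a "References" heading onward.
    body = re.split(r"\n#+\s*References\b", markdown_text, maxsplit=1)[0]
    md_chunks = MarkdownTextSplitter(chunk_size=2000, chunk_overlap=0).split_text(body)
    token_splitter = TokenTextSplitter(chunk_size=512, chunk_overlap=50)
    return [piece for chunk in md_chunks for piece in token_splitter.split_text(chunk)]


def generate_question(passage: str) -> str:
    """Ask GPT-4 for one question answerable from the passage (stand-in prompt)."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": f"Write one specific question that can be answered only from this passage:\n\n{passage}",
        }],
    )
    return response.choices[0].message.content


# `papers` would hold the markdown text produced by the Marker OCR model (not shown).
papers: list[str] = []
corpus = [passage for paper in papers for passage in chunk_paper(paper)]

# Sample 520 passages for question generation, as described in the card.
sampled = random.sample(corpus, k=min(520, len(corpus)))
qa_pairs = [(generate_question(passage), passage) for passage in sampled]
```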

## Acknowledgements

This dataset's corpus originates from various LLM-related research articles on arXiv.
Marker Inc. does not hold copyright or any other rights to the corpus content itself.

Also note that this is an alpha version of our evaluation data generation pipeline, run without human verification, so its quality may be lower than that of a human-curated dataset.