|
---
license: apache-2.0
task_categories:
- text2text-generation
- question-answering
language:
- tr
size_categories:
- 1K<n<10K
---
|
|
|
![WikiRAG-TR Icon](docs/WikiRAG_TR.png) |
|
|
|
# Dataset Summary |
|
WikiRAG-TR is a dataset of nearly 6K (5,999) question-answer pairs synthetically created from the introduction sections of Turkish Wikipedia articles. The dataset is intended for Turkish Retrieval-Augmented Generation (RAG) tasks.
|
|
|
## Dataset Information |
|
- **Number of Instances**: 5999 (5725 synthetically generated question-answer pairs, 274 augmented negative samples) |
|
- **Dataset Size**: 20.5 MB |
|
- **Language**: Turkish |
|
- **Dataset License**: apache-2.0 |
|
- **Dataset Categories**: Text2Text Generation, Question Answering
|
- **Dataset Domain**: STEM and Social Sciences |
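
For reference, the dataset can be loaded with the Hugging Face `datasets` library. The sketch below is a minimal example; the Hub repository id shown is an assumption, so substitute this card's actual path.

```python
from datasets import load_dataset

# Hypothetical Hub id; replace with this card's actual repository path.
dataset = load_dataset("WiroAI/wikirag-tr", split="train")

print(dataset[0]["question"])
print(dataset[0]["answer"])
```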
|
|
|
## WikiRAG-TR Pipeline |
|
|
|
The dataset was created in two main phases, each illustrated by its own diagram below.
|
|
|
### Phase 1: Subcategory Collection |
|
![Subcategory collection diagram](docs/collecting_subcategories.png) |
|
|
|
In this initial phase: |
|
1. A curated list of seed categories was compiled, including science, technology, engineering, mathematics, physics, chemistry, biology, geology, meteorology, history, social sciences, and more.
|
2. Using these seed categories, subcategories were recursively gathered from Wikipedia. |
|
- **Recursion depth** was set to 3, and the **number of subcategories** collected was limited to 100 per depth level.
|
3. At each step, the following subcategory types were filtered out (the collection and filtering logic is sketched in code after this list):

- Subcategories containing **NSFW words**.

- Subcategories that only contain **lists of items**.

- Subcategories used as **templates**.
|
4. Articles from the resulting subcategory list were acquired. |
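
For illustration, a minimal sketch of the collection step using the MediaWiki Action API (`list=categorymembers`) is shown below. The endpoint and query parameters are real, but the helper names, the empty NSFW word list, and the keyword heuristics for list/template categories are assumptions, not the authors' actual implementation.

```python
import requests

API_URL = "https://tr.wikipedia.org/w/api.php"  # Turkish Wikipedia API endpoint
MAX_DEPTH = 3        # recursion depth from the card
MAX_PER_DEPTH = 100  # subcategory limit per depth level

# Placeholder: the actual NSFW word list is not published with this card.
NSFW_WORDS: set[str] = set()

def is_filtered(title: str) -> bool:
    """Drop NSFW, list-of-items, and template subcategories."""
    lowered = title.lower()
    if any(word in lowered for word in NSFW_WORDS):
        return True
    # "liste" and "şablon" are Turkish for "list" and "template".
    return "liste" in lowered or "şablon" in lowered

def collect_subcategories(category: str, depth: int = 0) -> set[str]:
    """Recursively gather subcategories of a seed category, up to MAX_DEPTH."""
    if depth >= MAX_DEPTH:
        return set()
    params = {
        "action": "query",
        "list": "categorymembers",
        "cmtitle": f"Kategori:{category}",
        "cmtype": "subcat",
        "cmlimit": MAX_PER_DEPTH,
        "format": "json",
    }
    response = requests.get(API_URL, params=params).json()
    found: set[str] = set()
    for member in response.get("query", {}).get("categorymembers", []):
        title = member["title"].split(":", 1)[1]  # strip the "Kategori:" prefix
        if is_filtered(title):
            continue
        found.add(title)
        found |= collect_subcategories(title, depth + 1)
    return found

if __name__ == "__main__":
    print(sorted(collect_subcategories("Fizik")))  # "Fizik" = physics
```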
|
|
|
### Phase 2: Dataset Generation |
|
![Creating WikiRAG-TR Diagram](docs/creating_wikirag_tr.png) |
|
|
|
The second phase involved the following steps: |
|
1. Introduction sections were extracted from the articles gathered in Phase 1. An article was discarded if any of the following applied (these filters are sketched in code after this list):

- The introduction was **too short** or **too long** (fewer than 50 or more than 2500 characters).

- The introduction contained **NSFW words**.

- The introduction contained **equations**.

- The introduction section was **empty**.
|
2. The filtered introductions were fed into a large language model (`Gemma-2-27B-it`) to generate synthetic question and answer pairs.
|
3. For each resulting row in the dataset (containing an introduction, question, and answer), the following operations were performed: |
|
- Unrelated contexts (introductions) were gathered from other rows to add false-positive retrievals to the context.
|
- These unrelated contexts were appended to a list. |
|
- The related context was added to this list. (In some cases, the relevant context was omitted to create **negative samples**, where the answer indicates the model cannot answer the question due to insufficient information. These negative samples were created separately, ensuring all original questions still have corresponding answers.)
|
- The list was shuffled to **randomize the position** of the relevant context. |
|
- The list elements were joined using the `\n` character.
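
The filtering and context-building steps above can be condensed into a short sketch. This is a minimal illustration assuming plain-text introductions; the helper names (`keep_introduction`, `build_context`) and the NSFW/equation detectors are assumptions rather than the authors' published code.

```python
import random

MIN_INTRO_CHARS, MAX_INTRO_CHARS = 50, 2500  # length bounds from the card

def contains_nsfw(text: str) -> bool:
    # Placeholder: the actual NSFW word list is not published with this card.
    return False

def contains_equation(text: str) -> bool:
    # Placeholder heuristic; the authors' equation detection is not documented.
    return "<math" in text or "{\\displaystyle" in text

def keep_introduction(intro: str) -> bool:
    """Apply the Phase 2 discard rules: empty, length bounds, NSFW, equations."""
    if not intro:
        return False
    if not (MIN_INTRO_CHARS <= len(intro) <= MAX_INTRO_CHARS):
        return False
    return not (contains_nsfw(intro) or contains_equation(intro))

def build_context(related_intro: str, unrelated_intros: list[str],
                  is_negative: bool = False) -> str:
    """Mix distractor introductions with (optionally) the relevant one."""
    contexts = list(unrelated_intros)
    if not is_negative:        # negative samples omit the relevant context
        contexts.append(related_intro)
    random.shuffle(contexts)   # randomize the relevant context's position
    return "\n".join(contexts)
```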
|
|
|
## Considerations for Using the Data |
|
The generated answers are usually short and concise. This may lead models trained on this dataset to generate similarly short answers.
|
Since Wikipedia articles were used to create this dataset, any biases and inaccuracies present in them may also exist in this dataset. |
|
|
|
## Dataset Columns |
|
- `id`: Unique identifier for each row. |
|
- `question`: The question generated by the model. |
|
- `answer`: The answer generated by the model. |
|
- `context`: The augmented context containing both relevant and irrelevant information. |
|
- `is_negative_response`: Indicates whether the answer is a negative response (0: No, 1: Yes). |
|
- `number_of_articles`: The number of article introductions used to create the context. |
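
As a hedged usage example with these columns (the Hub id is again an assumption, as in the loading sketch above), negative samples can be separated from regular ones via `is_negative_response`:

```python
from datasets import load_dataset

# Hub id is an assumption; see the loading sketch earlier in this card.
dataset = load_dataset("WiroAI/wikirag-tr", split="train")

positives = dataset.filter(lambda row: row["is_negative_response"] == 0)
negatives = dataset.filter(lambda row: row["is_negative_response"] == 1)
print(len(positives), len(negatives))  # should match the 5725/274 split above
```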
|
|
|
# Attributions |
|
<a href="https://www.flaticon.com/free-icons/globe" title="globe icons">Globe icons created by Freepik - Flaticon</a> |
|
|
|
<a href="https://www.flaticon.com/free-icons/search" title="search icons">Search icons created by Freepik - Flaticon</a> |