---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ar
language_bcp47:
- ar-SA
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: arcd
pretty_name: ARCD
dataset_info:
  features:
  - name: id
    dtype: string
  - name: title
    dtype: string
  - name: context
    dtype: string
  - name: question
    dtype: string
  - name: answers
    sequence:
    - name: text
      dtype: string
    - name: answer_start
      dtype: int32
  config_name: plain_text
  splits:
  - name: train
    num_bytes: 811064
    num_examples: 693
  - name: validation
    num_bytes: 885648
    num_examples: 702
  download_size: 1942399
  dataset_size: 1696712
---
# Dataset Card for "arcd"

## Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
## Dataset Description
- Homepage: https://github.com/husseinmozannar/SOQAL/tree/master/data
- Repository: https://github.com/husseinmozannar/SOQAL
- Paper: https://www.aclweb.org/anthology/W19-4612 (Neural Arabic Question Answering, WANLP 2019)
- Point of Contact: More Information Needed
- Size of downloaded dataset files: 1.85 MB
- Size of the generated dataset: 1.62 MB
- Total amount of disk used: 3.47 MB
### Dataset Summary

The Arabic Reading Comprehension Dataset (ARCD) is composed of 1,395 questions posed by crowdworkers on Wikipedia articles.
### Supported Tasks and Leaderboards

The dataset supports extractive question answering (`question-answering` / `extractive-qa`): answering a question with a span of text taken from an accompanying Wikipedia passage.

### Languages

The dataset is monolingual Arabic (`ar`, BCP-47: `ar-SA`).
## Dataset Structure

### Data Instances

#### plain_text
- Size of downloaded dataset files: 1.85 MB
- Size of the generated dataset: 1.62 MB
- Total amount of disk used: 3.47 MB
An example of 'train' looks as follows.
This example was too long and was cropped:
```
{
    "answers": "{\"answer_start\": [34], \"text\": [\"صحابي من صحابة رسول الإسلام محمد، وعمُّه وأخوه من الرضاعة وأحد وزرائه الأربعة عشر،\"]}...",
    "context": "\"حمزة بن عبد المطلب الهاشمي القرشي صحابي من صحابة رسول الإسلام محمد، وعمُّه وأخوه من الرضاعة وأحد وزرائه الأربعة عشر، وهو خير أع...",
    "id": "621723207492",
    "question": "من هو حمزة بن عبد المطلب؟",
    "title": "حمزة بن عبد المطلب"
}
```
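For reference, here is a minimal sketch of loading ARCD with the Hugging Face `datasets` library, assuming the dataset is published on the Hub under the id `arcd` (substitute the actual id if it differs):

```python
from datasets import load_dataset

# "plain_text" is the only configuration, so it could also be omitted.
dataset = load_dataset("arcd", "plain_text")

# Each example follows the fields documented below.
example = dataset["train"][0]
print(example["question"])
print(example["answers"])  # {"text": [...], "answer_start": [...]}
```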
### Data Fields

The data fields are the same among all splits.

#### plain_text

- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
  - `text`: a `string` feature.
  - `answer_start`: an `int32` feature.
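Because the task is extractive, `answer_start` is a character offset into `context`. A hypothetical helper (not part of the dataset) showing how the fields fit together:

```python
def answer_span(example: dict) -> str:
    """Slice the annotated answer out of the context using its character offset."""
    start = example["answers"]["answer_start"][0]
    text = example["answers"]["text"][0]
    # For a well-aligned annotation, this slice equals the annotated answer text.
    return example["context"][start : start + len(text)]
```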
### Data Splits

| name       | train | validation |
| ---------- | ----- | ---------- |
| plain_text | 693   | 702        |
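The counts above can be double-checked locally, again assuming the Hub id `arcd`:

```python
from datasets import load_dataset

ds = load_dataset("arcd")
# Prints the number of rows per split; expected {'train': 693, 'validation': 702}.
print({name: split.num_rows for name, split in ds.items()})
```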
## Dataset Creation

### Curation Rationale

### Source Data

#### Initial Data Collection and Normalization

#### Who are the source language producers?

### Annotations

#### Annotation process

#### Who are the annotators?

### Personal and Sensitive Information

## Considerations for Using the Data

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations

## Additional Information

### Dataset Curators
### Licensing Information

The dataset is released under the MIT license (see the `license` field in the metadata above).

### Citation Information
```
@inproceedings{mozannar-etal-2019-neural,
title = "Neural {A}rabic Question Answering",
author = "Mozannar, Hussein and
Maamary, Elie and
El Hajal, Karl and
Hajj, Hazem",
booktitle = "Proceedings of the Fourth Arabic Natural Language Processing Workshop",
month = aug,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/W19-4612",
doi = "10.18653/v1/W19-4612",
pages = "108--118",
abstract = "This paper tackles the problem of open domain factual Arabic question answering (QA) using Wikipedia as our knowledge source. This constrains the answer of any question to be a span of text in Wikipedia. Open domain QA for Arabic entails three challenges: annotated QA datasets in Arabic, large scale efficient information retrieval and machine reading comprehension. To deal with the lack of Arabic QA datasets we present the Arabic Reading Comprehension Dataset (ARCD) composed of 1,395 questions posed by crowdworkers on Wikipedia articles, and a machine translation of the Stanford Question Answering Dataset (Arabic-SQuAD). Our system for open domain question answering in Arabic (SOQAL) is based on two components: (1) a document retriever using a hierarchical TF-IDF approach and (2) a neural reading comprehension model using the pre-trained bi-directional transformer BERT. Our experiments on ARCD indicate the effectiveness of our approach with our BERT-based reader achieving a 61.3 F1 score, and our open domain system SOQAL achieving a 27.6 F1 score.",
}
```
### Contributions
Thanks to @albertvillanova, @lewtun, @mariamabarham, @thomwolf, @tayciryahmed for adding this dataset.