---
dataset_info:
  features:
  - name: segment_id
    dtype: string
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: dialect
    dtype: string
  - name: domain
    dtype: string
  - name: audio_duration
    dtype: float64
  splits:
  - name: test
    num_bytes: 1354672655.25
    num_examples: 4854
  download_size: 1338284576
  dataset_size: 1354672655.25
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
license: cc-by-nc-nd-4.0
task_categories:
- audio-classification
language:
- ar
tags:
- dialect
pretty_name: 'MADIS 5: Multi-domain Arabic Dialect Identification in Speech'
size_categories:
- 1K<n<10K
---

## Dataset Overview
MADIS-5 (Multi-domain Arabic Dialect Identification in Speech) is a manually curated dataset for evaluating the cross-domain robustness of Arabic Dialect Identification (ADI) systems. It provides a benchmark for testing out-of-domain generalization across speech domains with diverse recording conditions and speaking styles.
## Dataset Statistics
- **Total Duration**: ~12 hours of speech
- **Total Utterances**: 4,854
- **Languages/Dialects**: 5 major Arabic varieties
  - Modern Standard Arabic (MSA)
  - Egyptian Arabic
  - Gulf Arabic
  - Levantine Arabic
  - Maghrebi Arabic
- **Domains**: 4 different spoken domains
- **Collection Period**: November 2024 to February 2025
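As a quick consistency check on the figures above (taking the ~12-hour total at face value), the implied average segment length is roughly 9 seconds:

```python
# Sanity-check the headline statistics: ~12 hours of speech over 4,854 utterances.
total_seconds = 12 * 3600   # ~12 hours, as stated above
num_utterances = 4854

avg_duration = total_seconds / num_utterances
print(f"average segment length: {avg_duration:.1f} s")  # roughly 8.9 s
```

This matches the short, segment-level nature of the data (cf. the 5-7 second TV drama clips described below).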
## Data Sources
Our dataset comprises speech samples from four public sources, each offering a different degree of similarity to the TV broadcast domain commonly used in ADI research:
### 📻 Radio Broadcasts
- **Source**: Local radio stations across the Arab world via radio.garden
- **Characteristics**: Similar to prior ADI datasets but with more casual, spontaneous speech
- **Domain Similarity**: High similarity to existing ADI benchmarks
### 📺 TV Dramas
- **Source**: Arabic Spoken Dialects Regional Archive (SARA) on Kaggle
- **Characteristics**: 5-7 second conversational speech segments
- **Domain Similarity**: Low similarity; predominantly dialogue rather than broadcast speech
### 🎤 TEDx Talks
- **Source**: Arabic portion of the TEDx dataset with dialect labels
- **Characteristics**: Presentations with educational content
- **Domain Similarity**: Moderate similarity due to topic diversity
### 🎭 Theater
- **Source**: YouTube dramatic and comedy plays from various Arab countries
- **Characteristics**: Theatrical performances spanning different time periods
- **Domain Similarity**: Low similarity; artistic, performative speech with occasionally poor recording conditions
## Annotation Process
### Quality Assurance
- **Primary Annotator**: Native Arabic speaker with a PhD in Computational Linguistics and extensive exposure to Arabic language variation
- **Verification**: Independent verification by a second native Arabic speaker with expertise in Arabic dialects
- **Segmentation**: Manual segmentation and labeling of all recordings
### Inter-Annotator Agreement
- **Perfect Agreement**: 97.7% of all samples
- **Disagreement**: 2.3% of samples, all radio broadcast segments (MSA vs. dialect classification)
- **Note**: The small disagreement reflects the natural continuum between MSA and dialectal Arabic in certain contexts. Final labels for disputed segments were assigned after discussion between the annotators.
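The percent-agreement figure reported above can be computed as in this minimal sketch; the labels below are invented placeholders, not actual annotations:

```python
def percent_agreement(labels_a, labels_b):
    """Fraction of samples on which two annotators assigned the same label."""
    assert len(labels_a) == len(labels_b)
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

# Hypothetical annotations for five segments (for illustration only).
annotator_1 = ["MSA", "Egyptian", "Gulf", "Levantine", "MSA"]
annotator_2 = ["MSA", "Egyptian", "Gulf", "Levantine", "Gulf"]

print(f"agreement: {percent_agreement(annotator_1, annotator_2):.1%}")  # 80.0%
```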
## Use Cases
This dataset is ideal for:
- Cross-domain robustness evaluation of Arabic dialect identification systems
- Benchmarking ADI models across diverse speech domains
- Research on domain adaptation in Arabic speech processing
- Development of more robust Arabic dialect classifiers
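For cross-domain robustness evaluation, a single pooled accuracy can hide large gaps between domains, so it helps to break results down by the dataset's `domain` column. A minimal sketch (the records and predictions below are invented placeholders, not real model output):

```python
from collections import defaultdict

def per_domain_accuracy(records, predictions):
    """records: dicts with 'dialect' and 'domain' keys, as in this dataset;
    predictions: one predicted dialect label per record."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for rec, pred in zip(records, predictions):
        total[rec["domain"]] += 1
        correct[rec["domain"]] += int(pred == rec["dialect"])
    return {domain: correct[domain] / total[domain] for domain in total}

# Hypothetical test records and model predictions.
records = [
    {"dialect": "Egyptian", "domain": "radio"},
    {"dialect": "Gulf", "domain": "radio"},
    {"dialect": "Levantine", "domain": "theater"},
    {"dialect": "MSA", "domain": "theater"},
]
predictions = ["Egyptian", "Gulf", "Maghrebi", "MSA"]

print(per_domain_accuracy(records, predictions))
# {'radio': 1.0, 'theater': 0.5}
```

Reporting the per-domain breakdown (and its worst case) directly measures the out-of-domain generalization this benchmark targets.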
## Dataset Advantages
- **Domain Diversity**: Four distinct speech domains with varying recording conditions
- **Expert Annotation**: High-quality labels from linguistic experts
- **Cross-domain Focus**: Specifically designed to test model robustness beyond a single domain
- **Real-world Scenarios**: Covers authentic speech from various contexts
## Citation
If you use this dataset in your research, please cite our paper:

```bibtex
@inproceedings{abdullah2025voice,
  title={Voice Conversion Improves Cross-Domain Robustness for Spoken Arabic Dialect Identification},
  author={Abdullah, Badr M. and Baas, Matthew and Möbius, Bernd and Klakow, Dietrich},
  booktitle={Interspeech 2025},
  year={2025},
  url={https://arxiv.org/abs/2505.24713}
}
```
## License
Creative Commons Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0)
## Acknowledgments
We thank the contributors to the source datasets and platforms that made this compilation possible, including radio.garden, the SARA archive, and the Multilingual TEDx dataset.