

Dataset Card for MASC: Massive Arabic Speech Corpus

Dataset Summary

MASC is a dataset that contains 1,000 hours of speech sampled at 16 kHz and crawled from over 700 YouTube channels. The dataset is multi-regional, multi-genre, and multi-dialect, and is intended to advance research and development of Arabic speech technology, with a special emphasis on Arabic speech recognition.

Supported Tasks

  • Automatic Speech Recognition

Languages

Arabic

How to use

The datasets library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the load_dataset function.

from datasets import load_dataset

masc = load_dataset("pain/MASC", split="train")
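Continuing from the snippet above, you can check the split's size and feature schema before working with individual examples; a short sketch:

# Print the number of rows and the feature schema of the split.
print(masc)
print(masc.features)

# Access one example; the audio is decoded on access.
print(masc[0]["text"])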

Using the datasets library, you can also stream the dataset on-the-fly by adding a streaming=True argument to the load_dataset function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.

from datasets import load_dataset

masc = load_dataset("pain/MASC", split="train", streaming=True)

print(next(iter(masc)))

Bonus: create a PyTorch DataLoader directly from the dataset (local or streamed).

Local

from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler

masc = load_dataset("pain/MASC", split="train")
batch_sampler = BatchSampler(RandomSampler(masc), batch_size=32, drop_last=False)
dataloader = DataLoader(masc, batch_sampler=batch_sampler)
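Note that the decoded audio arrays have different lengths, so the default collation will fail when batching raw examples. Below is a minimal sketch of a custom collate_fn that reuses masc and batch_sampler from the block above and keeps only the waveforms and transcriptions; model-specific padding or feature extraction is left out and would normally be handled by a processor.

import torch
from torch.utils.data import DataLoader

def collate_fn(batch):
    # Keep variable-length waveforms as a list of tensors and transcriptions as a list of strings.
    waveforms = [torch.tensor(example["audio"]["array"]) for example in batch]
    texts = [example["text"] for example in batch]
    return {"waveforms": waveforms, "texts": texts}

dataloader = DataLoader(masc, batch_sampler=batch_sampler, collate_fn=collate_fn)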

Streaming

from datasets import load_dataset
from torch.utils.data import DataLoader

masc = load_dataset("pain/MASC", split="train", streaming=True)
dataloader = DataLoader(masc, batch_size=32)
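In streaming mode the dataset is an IterableDataset, so batches are produced as samples are downloaded rather than read from disk. A short usage sketch, reusing the collate_fn sketched in the Local section above to handle variable-length audio:

dataloader = DataLoader(masc, batch_size=32, collate_fn=collate_fn)

for batch in dataloader:
    # Inspect the first streamed batch, then stop.
    print(batch["texts"][:2])
    break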

To find out more about loading and preparing audio datasets, head over to hf.co/blog/audio-datasets.

Example scripts

Train your own CTC or Seq2Seq Automatic Speech Recognition models on MASC with transformers; see the speech recognition examples in the transformers repository.
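As a quick sanity check before training, you can run an off-the-shelf multilingual model on a single MASC sample with the transformers pipeline. A minimal sketch, assuming openai/whisper-small as the checkpoint (any Arabic-capable ASR checkpoint works the same way):

from datasets import load_dataset
from transformers import pipeline

masc = load_dataset("pain/MASC", split="train", streaming=True)
sample = next(iter(masc))

# MASC audio is sampled at 16 kHz, which matches the model's expected input rate.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
prediction = asr(sample["audio"]["array"])

print("reference :", sample["text"])
print("prediction:", prediction["text"])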

Dataset Structure

Data Instances

A typical data point comprises the path to the audio file, its transcription, and metadata about the source video segment.

{
    'video_id': 'OGqz9G-JO0E',
    'start': 770.6,
    'end': 781.835,
    'duration': 11.24,
    'text': 'اللهم من ارادنا وبلادنا وبلاد المسلمين بسوء اللهم فاشغله في نفسه ورد كيده في نحره واجعل تدبيره تدميره يا رب العالمين',
    'type': 'c',
    'file_path': '87edeceb-5349-4210-89ad-8c3e91e54062_OGqz9G-JO0E.wav',
    'audio': {
        'path': None,
        'array': array([0.05938721, 0.0539856, 0.03460693, ..., 0.00393677, 0.01745605, 0.03045654]),
        'sampling_rate': 16000
    }
}

Data Fields

video_id (string): The ID of the YouTube video from which the audio chunk was extracted

start (float64): The start time of the audio chunk, in seconds

end (float64): The end time of the audio chunk, in seconds

duration (float64): The duration of the chunk, in seconds

text (string): The transcription of the chunk

audio (dict): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column (dataset[0]["audio"]) the audio file is automatically decoded and resampled to dataset.features["audio"].sampling_rate. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the "audio" column: dataset[0]["audio"] should always be preferred over dataset["audio"][0] (see the snippet after this list).

type (string): The subset type of the chunk: "c" for clean, "n" for noisy

file_path (string): The path to the audio chunk's file

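To make the decoding and resampling note above concrete, here is a minimal sketch that reads the decoded audio and casts the column to a different sampling rate with Audio from the datasets library; the 8,000 Hz target is purely illustrative.

from datasets import load_dataset, Audio

masc = load_dataset("pain/MASC", split="train")

# Query the sample index first; the audio is decoded (and resampled if needed) on access.
print(masc[0]["audio"]["sampling_rate"])  # 16000

# Cast the audio column; subsequent accesses decode to 8 kHz arrays.
masc = masc.cast_column("audio", Audio(sampling_rate=8000))
print(masc[0]["audio"]["sampling_rate"])  # 8000
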
Data Splits

The speech material has been subdivided into train, dev, and test portions.

The splits contain both clean and noisy data, which can be distinguished by the type field (see the sketch below).
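A minimal sketch of separating the clean and noisy chunks of a split using the type field; the "train" split name follows the loading examples above, and "dev" and "test" would be handled the same way.

from datasets import load_dataset

masc_train = load_dataset("pain/MASC", split="train")

# "c" marks clean chunks, "n" marks noisy ones.
clean_train = masc_train.filter(lambda example: example["type"] == "c")
noisy_train = masc_train.filter(lambda example: example["type"] == "n")

print(len(clean_train), len(noisy_train))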

Citation Information

@INPROCEEDINGS{10022652,
  author={Al-Fetyani, Mohammad and Al-Barham, Muhammad and Abandah, Gheith and Alsharkawi, Adham and Dawas, Maha},
  booktitle={2022 IEEE Spoken Language Technology Workshop (SLT)}, 
  title={MASC: Massive Arabic Speech Corpus}, 
  year={2023},
  volume={},
  number={},
  pages={1006-1013},
  doi={10.1109/SLT54892.2023.10022652}
}