---
dataset_info:
  features:
  - name: audio_filename
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 2229329827
    num_examples: 8361492
  download_size: 1220920889
  dataset_size: 2229329827
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# malaysia-ai/pseudolabel-dialects-youtube-whisper-large-v3
Pseudolabels for malaysia-ai/malaysian-dialects-youtube, generated using openai/whisper-large-v3.
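
The pseudolabels themselves live in the parquet shards described in the config above, so the text table can be read directly with the `datasets` library. A minimal sketch (the audio files are distributed separately as zip archives, see the preparation steps below):

```python
from datasets import load_dataset

# Load only the transcript table: columns `audio_filename` and `text`.
ds = load_dataset(
    "malaysia-ai/pseudolabel-dialects-youtube-whisper-large-v3",
    split="train",
)
print(ds)     # ~8.36M rows
print(ds[0])  # {'audio_filename': ..., 'text': ...}
```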
## How to prepare the dataset
```bash
huggingface-cli download --repo-type dataset \
--include '*.zip' \
--local-dir './' \
--max-workers 20 \
malaysia-ai/pseudolabel-dialects-youtube-whisper-large-v3

wget https://gist.githubusercontent.com/huseinzol05/2e26de4f3b29d99e993b349864ab6c10/raw/9b2251f3ff958770215d70c8d82d311f82791b78/unzip.py
python3 unzip.py
```
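
After the archives are extracted, each row's `audio_filename` should resolve to an audio file on disk. A hedged sketch for pairing transcripts with the extracted audio, assuming the zips were unzipped into the current directory and that `audio_filename` is a path relative to it (both assumptions, not stated on this card):

```python
import os
from datasets import load_dataset

ds = load_dataset(
    "malaysia-ai/pseudolabel-dialects-youtube-whisper-large-v3",
    split="train",
)

# Inspect a small sample and check that the audio files are present locally.
missing = 0
for row in ds.select(range(10)):
    path = row["audio_filename"]  # assumed relative to './'
    if os.path.exists(path):
        print(path, "->", row["text"][:80])
    else:
        missing += 1
print("missing in sample:", missing)
```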