---
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
- text-to-speech
language:
- vi
pretty_name: VLSP 2020 - VinAI - ASR challenge dataset
size_categories:
- 10K<n<100K
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: transcription
    dtype: string
  splits:
  - name: train
    num_bytes: 17159347574.893
    num_examples: 56427
  download_size: 11649243045
  dataset_size: 17159347574.893
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# unofficial mirror of VLSP 2020 - VinAI - ASR challenge dataset

official announcement:
- in Vietnamese: https://institute.vinbigdata.org/events/vinbigdata-chia-se-100-gio-du-lieu-tieng-noi-cho-cong-dong/
- in English: https://institute.vinbigdata.org/en/events/vinbigdata-shares-100-hour-data-for-the-community/
- VLSP 2020 workshop: https://vlsp.org.vn/vlsp2020

official download: https://drive.google.com/file/d/1vUSxdORDxk-ePUt-bUVDahpoXiqKchMx/view?usp=sharing

contact: info@vinbigdata.org

100 hours of audio, 56.4k samples, transcription accuracy about 96%.

pre-processing applied: merged all transcript text files into one and removed the `<unk>` token (a hedged sketch of this step is included at the end of this card).

still to do: check for misspellings; restore foreign words that were transcribed phonetically in Vietnamese.

usage with HuggingFace:
```python
# pip install -q "datasets[audio]" torch
from datasets import load_dataset
from torch.utils.data import DataLoader

dataset = load_dataset("doof-ferb/vlsp2020_vinai_100h", split="train", streaming=True)
# streaming mode returns an IterableDataset, which has no `set_format`; use `with_format` instead
dataset = dataset.with_format("torch")
# audio clips have different lengths, so keep each batch as a plain list of sample dicts
dataloader = DataLoader(dataset, batch_size=4, collate_fn=list)
```
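
A quick sanity check (a minimal sketch, assuming the streaming snippet above has already run): pull one batch and inspect the decoded fields. With `collate_fn=list`, each batch is a plain list of sample dicts whose keys match the features declared in the metadata (`audio`, `transcription`).
```python
# pull a single batch and inspect it
batch = next(iter(dataloader))
for sample in batch:
    print(sample["transcription"])
    print(sample["audio"]["array"].shape, sample["audio"]["sampling_rate"])
```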
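
For reference, a minimal sketch of the pre-processing step described above (merging the transcript text files and dropping `<unk>`). This is not the exact script used for this mirror; the directory layout and file names below are hypothetical, so adjust them to wherever the official archive was extracted.
```python
from pathlib import Path

# hypothetical paths: adjust to the actual location of the extracted archive
transcript_dir = Path("vlsp2020_train_set/")
merged_path = Path("transcripts_merged.txt")

with merged_path.open("w", encoding="utf-8") as out:
    for txt_file in sorted(transcript_dir.rglob("*.txt")):
        for line in txt_file.read_text(encoding="utf-8").splitlines():
            # drop the <unk> placeholder token, keep everything else as-is
            cleaned = " ".join(tok for tok in line.split() if tok != "<unk>")
            if cleaned:
                out.write(cleaned + "\n")
```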