---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: channel
    dtype: string
  - name: transcript_whisper
    dtype: string
  - name: title
    dtype: string
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: transcript_sensevoice
    dtype: string
  - name: emotion_sensevoice
    sequence: string
  - name: event_sensevoice
    sequence: string
  - name: c50
    dtype: string
  - name: snr
    dtype: string
  - name: speech_duration
    dtype: string
  - name: emotion_emotion2vec
    dtype: string
  splits:
  - name: train
    num_bytes: 544892035865.877
    num_examples: 1478373
  download_size: 527025543429
  dataset_size: 544892035865.877
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

## An earlier version of the dataset contains some duplicated data; a fix is on the way. Sorry for the inconvenience

## Cantonese YouTube Pseudo-Transcription Dataset
- Contains approximately 10k hours of audio sourced from YouTube
- Videos are chosen at random and scraped on a per-channel basis
- Includes news, vlogs, entertainment, stories, and health content
- Columns
  - `transcript_whisper`: transcribed with `Scrya/whisper-large-v2-cantonese`, using `alvanlii/whisper-small-cantonese` for speculative decoding
  - `transcript_sensevoice`: transcribed with `FunAudioLLM/SenseVoiceSmall`
    - [OpenCC](https://github.com/BYVoid/OpenCC) is used to convert the output to Traditional Chinese
    - event tags are isolated into `event_sensevoice`
    - emotion tags are isolated into `emotion_sensevoice`
  - `snr`: signal-to-noise ratio, extracted with `ylacombe/brouhaha-best`
  - `c50`: speech clarity (C50), extracted with `ylacombe/brouhaha-best`
  - `emotion_emotion2vec`: emotion label, extracted with `emotion2vec/emotion2vec_plus_large`
- Processing
  - The full audio is segmented using [WhisperX](https://github.com/m-bain/whisperX) with `Scrya/whisper-large-v2-cantonese`
  - Preliminary filtering removes segments containing phrases such as:
    - `liking/subscribing to YouTube channel`
    - `subtitles by [xxxx]`
  - Additional filtering is recommended for your own use; see the sketch below
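
As a starting point for that additional filtering, below is a minimal sketch that loads the dataset in streaming mode (avoiding the full ~500 GB download) and drops segments based on the `snr`, `c50`, and `transcript_whisper` columns. The repository ID, the numeric thresholds, and the boilerplate phrase list are illustrative assumptions, not values prescribed by this card; note that `snr` and `c50` are stored as strings and are parsed to floats here.

```python
from datasets import load_dataset

# Hypothetical repo ID -- replace with the actual path of this dataset on the Hub.
REPO_ID = "your-username/cantonese-youtube-pseudo-transcription"

# Streaming mode yields examples on the fly instead of downloading everything up front.
ds = load_dataset(REPO_ID, split="train", streaming=True)

# Example boilerplate phrases ("subscribe", "subtitles by"); tune for your own use.
BOILERPLATE = ["訂閱", "字幕由"]

def keep(example):
    """Keep segments with reasonable SNR/C50 and no boilerplate phrases."""
    try:
        snr = float(example["snr"])
        c50 = float(example["c50"])
    except (TypeError, ValueError):
        return False
    # Thresholds below are assumptions, not recommendations from the dataset card.
    if snr < 15.0 or c50 < 10.0:
        return False
    text = example["transcript_whisper"] or ""
    return not any(phrase in text for phrase in BOILERPLATE)

filtered = ds.filter(keep)

# Peek at a few surviving examples.
for example in filtered.take(5):
    print(example["transcript_whisper"])
```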