Distil Whisper: TEDLIUM
This is a variant of the TEDLIUM dataset, augmented to return the pseudo-labelled Whisper transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by labelling the input audio data with the Whisper large-v2 model using greedy sampling. For information on how the original dataset was curated, refer to the original dataset card.
Standalone Usage
First, install the latest version of the 🤗 Datasets package:
pip install --upgrade pip
pip install --upgrade datasets[audio]
The dataset can be downloaded and pre-processed on disk using the load_dataset function:
from datasets import load_dataset
dataset = load_dataset("distil-whisper/tedlium", "release3")
# take the first sample of the validation set
sample = dataset["validation"][0]
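Each sample contains the original dataset fields plus the pseudo-labelled transcription. Below is a minimal sketch for inspecting a sample; the column names "audio", "text" and "whisper_transcript" are assumed here and may differ in the actual dataset:

# inspect the fields of a sample (column names assumed as described above)
audio_array = sample["audio"]["array"]            # raw waveform as a numpy array
sampling_rate = sample["audio"]["sampling_rate"]  # sampling rate of the audio
print(sample["text"])                # original reference transcription
print(sample["whisper_transcript"])  # Whisper large-v2 pseudo-label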
It can also be streamed directly from the Hub using the Datasets streaming mode. In streaming mode, samples are loaded one at a time as they are requested, rather than downloading the entire dataset to disk:
from datasets import load_dataset
dataset = load_dataset("distil-whisper/tedlium", "release3", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
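Because streamed samples are yielded lazily, you can preview a handful of examples without downloading the full split. A minimal sketch, again assuming the "whisper_transcript" column name from above:

from itertools import islice

# take the first 5 samples of the streamed validation set
for sample in islice(dataset["validation"], 5):
    print(sample["whisper_transcript"])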
Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the Distil Whisper repository.
License
This dataset is licensed under CC BY-NC-ND 3.0.