|
--- |
|
license: cc-by-4.0 |
|
task_categories: |
|
- text-to-speech |
|
language: |
|
- en |
|
size_categories: |
|
- 10K<n<100K |
|
|
|
configs: |
|
- config_name: dev |
|
data_files: |
|
- split: dev.clean |
|
path: "data/dev.clean/dev.clean*.parquet" |
|
|
|
- config_name: clean |
|
data_files: |
|
- split: dev.clean |
|
path: "data/dev.clean/dev.clean*.parquet" |
|
- split: test.clean |
|
path: "data/test.clean/test.clean*.parquet" |
|
- split: train.clean.100 |
|
path: "data/train.clean.100/train.clean.100*.parquet" |
|
- split: train.clean.360 |
|
path: "data/train.clean.360/train.clean.360*.parquet" |
|
|
|
- config_name: other |
|
data_files: |
|
- split: dev.other |
|
path: "data/dev.other/dev.other*.parquet" |
|
- split: test.other |
|
path: "data/test.other/test.other*.parquet" |
|
- split: train.other.500 |
|
path: "data/train.other.500/train.other.500*.parquet" |
|
|
|
- config_name: all |
|
data_files: |
|
- split: dev.clean |
|
path: "data/dev.clean/dev.clean*.parquet" |
|
- split: dev.other |
|
path: "data/dev.other/dev.other*.parquet" |
|
- split: test.clean |
|
path: "data/test.clean/test.clean*.parquet" |
|
- split: test.other |
|
path: "data/test.other/test.other*.parquet" |
|
- split: train.clean.100 |
|
path: "data/train.clean.100/train.clean.100*.parquet" |
|
- split: train.clean.360 |
|
path: "data/train.clean.360/train.clean.360*.parquet" |
|
- split: train.other.500 |
|
path: "data/train.other.500/train.other.500*.parquet" |
|
--- |
|
# Dataset Card for LibriTTS |
|
|
|
|
|
|
LibriTTS is a multi-speaker corpus of approximately 585 hours of read English speech at a 24 kHz sampling rate,
prepared by Heiga Zen with the assistance of Google Speech and Google Brain team members. The LibriTTS corpus is
designed for TTS research. It is derived from the original materials (MP3 audio files from LibriVox and text files
from Project Gutenberg) of the LibriSpeech corpus.
|
|
|
## Overview |
|
|
|
This is the LibriTTS dataset, adapted for the `datasets` library. |
|
|
|
## Usage |
|
|
|
### Splits |
|
|
|
There are 7 splits (dots replace the dashes in the original dataset's split names, to comply with Hugging Face naming requirements):
|
|
|
- dev.clean |
|
- dev.other |
|
- test.clean |
|
- test.other |
|
- train.clean.100 |
|
- train.clean.360 |
|
- train.other.500 |
|
|
|
### Configurations |
|
|
|
There are 3 configurations, each of which limits the splits that `load_dataset()` will download.
|
|
|
The default configuration is "all". |
|
|
|
- "dev": only the "dev.clean" split (good for testing the dataset quickly) |
|
- "clean": contains only "clean" splits |
|
- "other": contains only "other" splits |
|
- "all": contains only "all" splits |
|
|
|
### Example |
|
|
|
Loading the `clean` config with only the `train.clean.100` split:

```python
from datasets import load_dataset

ds = load_dataset("blabble-io/libritts", "clean", split="train.clean.100")
```
|
|
|
Streaming is also supported:

```python
from datasets import load_dataset

ds = load_dataset("blabble-io/libritts", streaming=True)
```
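
A streaming load returns an `IterableDatasetDict` keyed by split name, so a minimal sketch of pulling the first example might look like this:

```python
from datasets import load_dataset

ds = load_dataset("blabble-io/libritts", streaming=True)

# Iterate lazily; nothing is downloaded up front.
sample = next(iter(ds["dev.clean"]))
print(sample["id"], sample["text_normalized"])
```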
|
|
|
### Columns |
|
|
|
```python
|
{ |
|
"audio": datasets.Audio(sampling_rate=24_000), |
|
"text_normalized": datasets.Value("string"), |
|
"text_original": datasets.Value("string"), |
|
"speaker_id": datasets.Value("string"), |
|
"path": datasets.Value("string"), |
|
"chapter_id": datasets.Value("string"), |
|
"id": datasets.Value("string"), |
|
} |
|
``` |
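
The `audio` column decodes lazily to a dict with the raw array and sampling rate. If a downstream model expects a different rate, a minimal sketch (assuming the "dev" config) resamples on the fly with `cast_column`:

```python
from datasets import Audio, load_dataset

ds = load_dataset("blabble-io/libritts", "dev", split="dev.clean")

# Re-cast the audio feature; resampling happens on access, not up front.
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
print(ds[0]["audio"]["sampling_rate"])  # 16000
```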
|
|
|
### Example Row |
|
|
|
```python
|
{ |
|
'audio': { |
|
'path': '/home/user/.cache/huggingface/datasets/downloads/extracted/5551a515e85b9e463062524539c2e1cb52ba32affe128dffd866db0205248bdd/LibriTTS/dev-clean/3081/166546/3081_166546_000028_000002.wav', |
|
'array': ..., |
|
'sampling_rate': 24000 |
|
}, |
|
'text_normalized': 'How quickly he disappeared!"', |
|
'text_original': 'How quickly he disappeared!"', |
|
'speaker_id': '3081', |
|
'path': '/home/user/.cache/huggingface/datasets/downloads/extracted/5551a515e85b9e463062524539c2e1cb52ba32affe128dffd866db0205248bdd/LibriTTS/dev-clean/3081/166546/3081_166546_000028_000002.wav', |
|
'chapter_id': '166546', |
|
'id': '3081_166546_000028_000002' |
|
} |
|
``` |
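
To write a decoded example back out as a standalone WAV file, one option, assuming the third-party `soundfile` package is installed, is:

```python
import soundfile as sf
from datasets import load_dataset

ds = load_dataset("blabble-io/libritts", "dev", split="dev.clean")
row = ds[0]

# The decoded array holds float samples at 24 kHz; soundfile infers the
# output format from the .wav extension.
sf.write("sample.wav", row["audio"]["array"], row["audio"]["sampling_rate"])
```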
|
|
|
## Dataset Details |
|
|
|
### Dataset Description |
|
|
|
- **License:** CC BY 4.0 |
|
|
|
### Dataset Sources
|
|
|
|
|
|
- **Homepage:** https://www.openslr.org/60/ |
|
- **Paper:** https://arxiv.org/abs/1904.02882 |
|
|
|
## Citation |
|
|
|
|
|
|
```bibtex
|
@ARTICLE{Zen2019-kz, |
|
title = "{LibriTTS}: A corpus derived from {LibriSpeech} for |
|
text-to-speech", |
|
author = "Zen, Heiga and Dang, Viet and Clark, Rob and Zhang, Yu and |
|
Weiss, Ron J and Jia, Ye and Chen, Zhifeng and Wu, Yonghui", |
|
abstract = "This paper introduces a new speech corpus called |
|
``LibriTTS'' designed for text-to-speech use. It is derived |
|
from the original audio and text materials of the |
|
LibriSpeech corpus, which has been used for training and |
|
evaluating automatic speech recognition systems. The new |
|
corpus inherits desired properties of the LibriSpeech corpus |
|
while addressing a number of issues which make LibriSpeech |
|
less than ideal for text-to-speech work. The released corpus |
|
consists of 585 hours of speech data at 24kHz sampling rate |
|
from 2,456 speakers and the corresponding texts. |
|
Experimental results show that neural end-to-end TTS models |
|
trained from the LibriTTS corpus achieved above 4.0 in mean |
|
opinion scores in naturalness in five out of six evaluation |
|
speakers. The corpus is freely available for download from |
|
http://www.openslr.org/60/.", |
|
month = apr, |
|
year = 2019, |
|
copyright = "http://arxiv.org/licenses/nonexclusive-distrib/1.0/", |
|
archivePrefix = "arXiv", |
|
primaryClass = "cs.SD", |
|
eprint = "1904.02882" |
|
} |
|
``` |