Update README.md
README.md CHANGED
@@ -288,22 +288,17 @@ language:
[CML-TTS](https://huggingface.co/datasets/ylacombe/cml-tts) [1] is a recursive acronym for CML-Multi-Lingual-TTS, a Text-to-Speech (TTS) dataset developed at the Center of Excellence in Artificial Intelligence (CEIA) of the Federal University of Goias (UFG). CML-TTS comprises audiobooks sourced from the public-domain books of Project Gutenberg, read by volunteers from the LibriVox project. The dataset includes recordings in Dutch, German, French, Italian, Polish, Portuguese, and Spanish, all at a sampling rate of 24 kHz.
-This dataset has been filtered by removing all the rows with a Levenshtein score inferior to 0.9.
-
This dataset was used alongside the [LibriTTS-R English dataset](https://huggingface.co/datasets/blabble-io/libritts_r) and the [Non-English subset of MLS](https://huggingface.co/datasets/facebook/multilingual_librispeech) to train [Parler-TTS Multilingual Mini v1.1](https://huggingface.co/ylacombe/p-m-e).
A training recipe is available in [the Parler-TTS library](https://github.com/huggingface/parler-tts).
## Motivation
-This dataset
-
-Contrarily to other TTS models, Parler-TTS is a **fully open-source** release. All of the datasets, pre-processing, training code and weights are released publicly under permissive license, enabling the community to build on our work and develop their own powerful TTS models.
-Parler-TTS was released alongside:
-* [The Parler-TTS repository](https://github.com/huggingface/parler-tts) - you can train and fine-tuned your own version of the model.
-* [The Data-Speech repository](https://github.com/huggingface/dataspeech) - a suite of utility scripts designed to annotate speech datasets.
-* [The Parler-TTS organization](https://huggingface.co/parler-tts) - where you can find the annotated datasets as well as the future checkpoints.
+This dataset was filtered to remove problematic samples.
+In the original dataset, some samples (especially short ones) had incomplete or incorrect transcriptions. To ensure quality, all rows with a Levenshtein similarity ratio below 0.9 were removed.
+
+**Note on Levenshtein distance:** the Levenshtein distance measures how different two strings are by counting the minimum number of single-character edits (insertions, deletions, or substitutions) needed to transform one string into another.
## Usage
@@ -317,8 +312,6 @@ load_dataset("https://huggingface.co/datasets/PHBJT/cml-tts-cleaned-levenshtein"
```
-**Note:** This dataset doesn't actually keep track of the audio column of the original version. You can merge it back to the original dataset using [this script](https://github.com/huggingface/dataspeech/blob/main/scripts/merge_audio_to_metadata.py) from Parler-TTS or, even better, get inspiration from [the training script](https://github.com/huggingface/parler-tts/blob/main/training/run_parler_tts_training.py) of Parler-TTS, that efficiently process multiple annotated datasets.
-
### Dataset Description
- **License:** CC BY 4.0
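
For context on the Levenshtein similarity ratio used as the filtering criterion in the changes above, here is a minimal, self-contained sketch of how such a filter might be applied. It is illustrative only: the actual filtering script is not part of this commit, the field names (`text`, `asr_text`) and the choice of which two strings are compared are assumptions, and the normalization used (1 - distance / max length) is just one common convention.

```python
# Illustrative sketch only: the dataset card states that rows with a
# Levenshtein similarity ratio below 0.9 were removed, but the actual
# filtering script, the compared text fields, and the exact normalization
# are not shown in this commit. Field names below are hypothetical.

def levenshtein_distance(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions, or
    substitutions needed to turn `a` into `b` (classic two-row DP)."""
    if len(a) < len(b):
        a, b = b, a
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        current = [i]
        for j, cb in enumerate(b, start=1):
            current.append(min(
                current[j - 1] + 1,           # insertion
                previous[j] + 1,              # deletion
                previous[j - 1] + (ca != cb)  # substitution (0 if chars match)
            ))
        previous = current
    return previous[-1]

def levenshtein_ratio(a: str, b: str) -> float:
    """One common normalization to a similarity in [0, 1]; 1.0 means identical."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein_distance(a, b) / max(len(a), len(b))

# Keep only rows whose two transcriptions agree closely enough
# (threshold 0.9, as stated in the dataset card).
THRESHOLD = 0.9
rows = [
    {"text": "il était une fois", "asr_text": "il était une fois"},
    {"text": "il était une fois une petite fille", "asr_text": "il était"},  # truncated
]
kept = [r for r in rows if levenshtein_ratio(r["text"], r["asr_text"]) >= THRESHOLD]
print(len(kept))  # -> 1
```

In practice, an off-the-shelf implementation such as the `rapidfuzz` or `python-Levenshtein` package would typically replace the hand-rolled distance function.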