---
library_name: transformers
tags:
- text-to-speech
- annotation
license: apache-2.0
language:
- en
- fr
- es
- pt
- pl
- de
- nl
- it
pipeline_tag: text-to-speech
inference: false
datasets:
- facebook/multilingual_librispeech
- parler-tts/libritts_r_filtered
- parler-tts/libritts-r-filtered-speaker-descriptions
- parler-tts/mls_eng
- parler-tts/mls-eng-speaker-descriptions
- ylacombe/mls-annotated
- ylacombe/cml-tts-filtered-annotated
- PHBJT/cml-tts-filtered
---

<img src="https://huggingface.co/datasets/parler-tts/images/resolve/main/thumbnail.png" alt="Parler Logo" width="800" style="margin-left: auto; margin-right: auto; display: block;"/>

# Parler-TTS Mini Multilingual v1.1

<a target="_blank" href="https://huggingface.co/spaces/PHBJT/multi_parler_tts">
  <img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>

**Parler-TTS Mini Multilingual v1.1** is a multilingual extension of [Parler-TTS Mini](https://huggingface.co/parler-tts/parler-tts-mini-v1.1).

🚨 Compared to [Mini Multilingual v1](https://huggingface.co/parler-tts/parler-tts-mini-multilingual), this version was trained with a set of consistent speaker names and a better format for the descriptions. 🚨

It is a fine-tuned version, trained on a [cleaned version](https://huggingface.co/datasets/PHBJT/cml-tts-filtered) of [CML-TTS](https://huggingface.co/datasets/ylacombe/cml-tts) and on the non-English version of [Multilingual LibriSpeech](https://huggingface.co/datasets/facebook/multilingual_librispeech).
In all, this represents some 9,200 hours of non-English data. To retain English capabilities, we also added back the [LibriTTS-R English dataset](https://huggingface.co/datasets/parler-tts/libritts_r_filtered), some 580 hours of high-quality English data.

**Parler-TTS Mini Multilingual** can speak in 8 European languages: English, French, Spanish, Portuguese, Polish, German, Italian and Dutch.

Thanks to its **better prompt tokenizer**, it can easily be extended to other languages. This tokenizer has a larger vocabulary and handles byte fallback, which simplifies multilingual training.
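
As a quick illustration of what byte fallback buys us, the snippet below (a minimal sketch, not part of the official examples) tokenizes text containing characters the vocabulary may not cover; the tokenizer decomposes them into byte-level pieces instead of emitting unknown tokens:

```py
from transformers import AutoTokenizer

# Prompt tokenizer shipped with the checkpoint (byte fallback enabled).
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-multilingual-v1.1")

# Polish text with diacritics: any character missing from the vocabulary
# is split into byte-level tokens rather than mapped to an unknown token.
print(tokenizer.tokenize("Zażółć gęślą jaźń"))
```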

🚨 This work is the result of a collaboration between the **HuggingFace audio team** and the **[Quantum Squadra](https://quantumsquadra.com/) team**. The **[AI4Bharat](https://ai4bharat.iitm.ac.in/) team** also provided advice and assistance in improving tokenization. 🚨

## 📖 Quick Index
* [👨‍💻 Installation](#👨‍💻-installation)
* [🎲 Using a random voice](#🎲-random-voice)
* [🎯 Using a specific speaker](#🎯-using-a-specific-speaker)
* [Motivation](#motivation)
* [Optimizing inference](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md)

## 🛠️ Usage

🚨 Unlike previous versions of Parler-TTS, here we use two tokenizers: one for the prompt and one for the description. 🚨

### 👨‍💻 Installation

Using Parler-TTS is as simple as "bonjour". Simply install the library once:

```sh
pip install git+https://github.com/huggingface/parler-tts.git
```

### 🎲 Random voice

**Parler-TTS Mini Multilingual** has been trained to generate speech with features that can be controlled with a simple text prompt, for example:

```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf

device = "cuda:0" if torch.cuda.is_available() else "cpu"

model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-multilingual-v1.1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-multilingual-v1.1")
description_tokenizer = AutoTokenizer.from_pretrained(model.config.text_encoder._name_or_path)

prompt = "Salut toi, comment vas-tu aujourd'hui?"
description = "A female speaker delivers a slightly expressive and animated speech with a moderate speed and pitch. The recording is of very high quality, with the speaker's voice sounding clear and very close up."

# The description and the prompt each go through their own tokenizer.
input_ids = description_tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```

### 🎯 Using a specific speaker

To ensure speaker consistency across generations, this checkpoint was also trained on 16 speakers, characterized by name (e.g. Daniel, Christine, Richard, Nicole, ...).

To take advantage of this, simply adapt your text description to specify which speaker to use: `Daniel's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise.`

```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf

device = "cuda:0" if torch.cuda.is_available() else "cpu"

model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-multilingual-v1.1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-multilingual-v1.1")
description_tokenizer = AutoTokenizer.from_pretrained(model.config.text_encoder._name_or_path)

prompt = "Salut toi, comment vas-tu aujourd'hui?"
description = "Daniel's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise."

input_ids = description_tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```

You can choose a speaker from this list: Mark, Jessica, Daniel, Christine, Christopher, Nicole, Richard, Julia, Alex, Natalie, Nicholas, Sophia, Steven, Olivia, Megan and Michelle.

**Tips**:
* We've set up an [inference guide](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md) to make generation faster. Think SDPA, torch.compile, batching and streaming! A minimal sketch of two of these speed-ups follows this list.
* Include the term "very clear audio" to generate the highest-quality audio, and "very noisy audio" for high levels of background noise.
* Punctuation can be used to control the prosody of the generations, e.g. use commas to add small breaks in speech.
* The remaining speech features (gender, speaking rate, pitch and reverberation) can be controlled directly through the prompt.
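
The snippet below is a minimal sketch of the SDPA and torch.compile speed-ups described in the inference guide; exact support can vary across `transformers` and `parler-tts` versions, so treat it as an illustration rather than a definitive recipe:

```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration

device = "cuda:0" if torch.cuda.is_available() else "cpu"

# Load with scaled-dot-product attention (SDPA) instead of eager attention.
model = ParlerTTSForConditionalGeneration.from_pretrained(
    "parler-tts/parler-tts-mini-multilingual-v1.1",
    attn_implementation="sdpa",
).to(device)

# Compile the forward pass; a static KV cache keeps tensor shapes stable,
# which torch.compile needs to avoid constant recompilation.
model.generation_config.cache_implementation = "static"
model.forward = torch.compile(model.forward, mode="default")
```

The first generations after compiling are slow while kernels are traced; subsequent calls with the same shapes run faster.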

## Motivation

Parler-TTS is a reproduction of work from the paper [Natural language guidance of high-fidelity text-to-speech with synthetic annotations](https://www.text-description-to-speech.com) by Dan Lyth and Simon King, from Stability AI and Edinburgh University respectively.

Contrary to other TTS models, Parler-TTS is a **fully open-source** release. All of the datasets, pre-processing, training code and weights are released publicly under a permissive license, enabling the community to build on our work and develop their own powerful TTS models.

Parler-TTS was released alongside:
* [The Parler-TTS repository](https://github.com/huggingface/parler-tts) - you can train and fine-tune your own version of the model.
* [The Data-Speech repository](https://github.com/huggingface/dataspeech) - a suite of utility scripts designed to annotate speech datasets.
* [The Parler-TTS organization](https://huggingface.co/parler-tts) - where you can find the annotated datasets as well as the future checkpoints.

## Citation

If you found this repository useful, please consider citing this work and also the original Stability AI paper:

```
@misc{lacombe-etal-2024-parler-tts,
  author = {Yoach Lacombe and Vaibhav Srivastav and Sanchit Gandhi},
  title = {Parler-TTS},
  year = {2024},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/huggingface/parler-tts}}
}
```

```
@misc{lyth2024natural,
  title={Natural language guidance of high-fidelity text-to-speech with synthetic annotations},
  author={Dan Lyth and Simon King},
  year={2024},
  eprint={2402.01912},
  archivePrefix={arXiv},
  primaryClass={cs.SD}
}
```

## License

This model is permissively licensed under the Apache 2.0 license.