speechbrainteam committed
Commit • 1399b22 • Parent(s): 4194025
Update README.md

README.md CHANGED
@@ -21,6 +21,9 @@ The pre-trained model takes in input a spectrogram and produces a waveform in output.

The sampling frequency is 22050 Hz.

+**NOTES**
+- This vocoder model is trained on a single speaker. Although it has some ability to generalize to different speakers, for better results, we recommend using a multi-speaker vocoder like [this model trained on LibriTTS at 16,000 Hz](https://huggingface.co/speechbrain/tts-hifigan-libritts-16kHz) or [this one trained on LibriTTS at 22,050 Hz](https://huggingface.co/speechbrain/tts-hifigan-libritts-22050Hz).
+- If you specifically require a vocoder with a 16,000 Hz sampling rate, please follow the 16 kHz link above for a suitable option (a loading sketch follows below).

## Install SpeechBrain

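As a minimal sketch (not part of this commit), loading the 16 kHz LibriTTS vocoder linked in the NOTES could look like the following. It assumes that model exposes the same `HIFIGAN.from_hparams`/`decode_batch` interface and 80-band mel input as the LJSpeech vocoder below; `tmpdir_vocoder_16k` is an arbitrary cache directory chosen for illustration.

```python
import torch
from speechbrain.pretrained import HIFIGAN

# Multi-speaker vocoder trained on LibriTTS at 16,000 Hz (first link in the NOTES).
# Assumption: same interface and mel configuration as the LJSpeech vocoder below.
hifi_gan_16k = HIFIGAN.from_hparams(
    source="speechbrain/tts-hifigan-libritts-16kHz",
    savedir="tmpdir_vocoder_16k",  # arbitrary local cache directory
)

# Same call pattern as the basic usage example: mel spectrograms in, waveforms out (16 kHz audio).
mel_specs = torch.rand(2, 80, 298)
waveforms = hifi_gan_16k.decode_batch(mel_specs)
```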
@@ -34,6 +37,7 @@ Please notice that we encourage you to read our tutorials and learn more about

### Using the Vocoder

+- *Basic Usage:*
```python
import torch
from speechbrain.pretrained import HIFIGAN
@@ -41,6 +45,51 @@ hifi_gan = HIFIGAN.from_hparams(source="speechbrain/tts-hifigan-ljspeech", savedir="tmpdir_vocoder")
mel_specs = torch.rand(2, 80, 298)
waveforms = hifi_gan.decode_batch(mel_specs)
```
+
+- *Convert a Spectrogram into a Waveform:*
+
+```python
+import torchaudio
+from speechbrain.pretrained import HIFIGAN
+from speechbrain.lobes.models.FastSpeech2 import mel_spectogram
+
+# Load a pretrained HIFIGAN Vocoder
+hifi_gan = HIFIGAN.from_hparams(source="speechbrain/tts-hifigan-ljspeech", savedir="tmpdir_vocoder")
+
+# Load an audio file (an example file can be found in this repository)
+# Ensure that the audio signal is sampled at 22050 Hz; refer to the provided link for a 16 kHz vocoder.
+signal, rate = torchaudio.load('speechbrain/tts-hifigan-ljspeech/example.wav')
+
+# Compute the mel spectrogram.
+# IMPORTANT: use these specific parameters to match the vocoder's training settings for optimal results.
+spectrogram, _ = mel_spectogram(
+    audio=signal.squeeze(),
+    sample_rate=22050,
+    hop_length=256,
+    win_length=None,
+    n_mels=80,
+    n_fft=1024,
+    f_min=0.0,
+    f_max=8000.0,
+    power=1,
+    normalized=False,
+    min_max_energy_norm=True,
+    norm="slaney",
+    mel_scale="slaney",
+    compression=True
+)
+
+# Convert the spectrogram to a waveform
+waveforms = hifi_gan.decode_batch(spectrogram)
+
+# Save the reconstructed audio as a waveform
+torchaudio.save('waveform_reconstructed.wav', waveforms.squeeze(1), 22050)
+
+# If everything is set up correctly, the original and reconstructed audio should be nearly indistinguishable.
+# Keep in mind that this vocoder is trained on a single speaker; for multi-speaker vocoder options, refer to the provided links.
+
+```
+
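The comments above state that the original and the reconstruction should be nearly indistinguishable. As a rough, hypothetical sanity check (not part of this commit), the two files used in the example could be compared as follows; `torchaudio.functional.resample` covers the case where the source audio is not at 22050 Hz, and the length trim accounts for small framing differences. Sample-wise error is only a coarse proxy for the perceptual similarity the comment refers to.

```python
import torchaudio
import torchaudio.functional as F

# Load the original example and the reconstruction written by the snippet above.
original, rate = torchaudio.load('speechbrain/tts-hifigan-ljspeech/example.wav')
reconstructed, _ = torchaudio.load('waveform_reconstructed.wav')

# The vocoder expects 22050 Hz input; resample the original if it came at another rate.
if rate != 22050:
    original = F.resample(original, orig_freq=rate, new_freq=22050)

# Framing can change the length by a few samples, so trim both to the shorter signal.
n = min(original.shape[-1], reconstructed.shape[-1])
mae = (original[..., :n] - reconstructed[..., :n]).abs().mean().item()
print(f"Mean absolute sample difference: {mae:.4f}")
```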
### Using the Vocoder with the TTS
```python
import torchaudio