ve-dot-exe committed on
Commit
d03e414
1 Parent(s): 3b20a3f

Update app.py

Files changed (1)
  1. app.py +35 -19
app.py CHANGED
@@ -138,30 +138,46 @@ with gr.Blocks(title="StyleTTS 2", css="footer{display:none !important}", theme=
 
  [Paper](https://arxiv.org/abs/2306.07691) - [Samples](https://styletts2.github.io/) - [Code](https://github.com/yl4579/StyleTTS2)
 
- A free demo of StyleTTS 2. **I am not affiliated with the StyleTTS 2 Authors.**
 
- #### Help this space get to the top of HF's trending list! Please give this space a Like!
 
- **Before using this demo, you agree to inform the listeners that the speech samples are synthesized by the pre-trained models, unless you have the permission to use the voice you synthesize. That is, you agree to only use voices whose speakers grant the permission to have their voice cloned, either directly or by license before making synthesized voices public, or you have to publicly announce that these voices are synthesized if you do not have the permission to use these voices.**
 
- Is there a long queue on this space? Duplicate it and add a more powerful GPU to skip the wait! **Note: Thank you to Hugging Face for their generous GPU grant program!**
 
  **NOTE: StyleTTS 2 does better on longer texts.** For example, making it say "hi" will produce a lower-quality result than making it say a longer phrase.""")
- gr.DuplicateButton("Duplicate Space")
- gr.HTML("""<script async src="https://www.googletagmanager.com/gtag/js?id=G-KP5GWL8NN5"></script>
- <script>
- window.dataLayer = window.dataLayer || [];
- function gtag(){dataLayer.push(arguments);}
- gtag('js', new Date());
- gtag('config', 'G-KP5GWL8NN5');
- </script>
- <script type="text/javascript">
- (function(c,l,a,r,i,t,y){
- c[a]=c[a]||function(){(c[a].q=c[a].q||[]).push(arguments)};
- t=l.createElement(r);t.async=1;t.src="https://www.clarity.ms/tag/"+i;
- y=l.getElementsByTagName(r)[0];y.parentNode.insertBefore(t,y);
- })(window, document, "clarity", "script", "jydi4lprw6");
- </script>""")
  # gr.TabbedInterface([vctk, clone, lj, longText], ['Multi-Voice', 'Voice Cloning', 'LJSpeech', 'Long Text [Beta]'])
  gr.TabbedInterface([vctk, clone, lj], ['Multi-Voice', 'Voice Cloning', 'LJSpeech', 'Long Text [Beta]'])
  gr.Markdown("""
 
 
  [Paper](https://arxiv.org/abs/2306.07691) - [Samples](https://styletts2.github.io/) - [Code](https://github.com/yl4579/StyleTTS2)
 
+ ![AdaurisAILogo](https://storage.googleapis.com/ad-auris-django-bucket/media/DALL%C2%B7E%202023-12-11%2001.17.36%20-%20Create%20a%20logo%20redesign%20that%20is%20minimalistic%2C%20incorporating%20cosmic%20and%20AI%20themes.%20The%20design%20should%20feature%20a%20simplified%20version%20of%20the%20'AD%20AURIS'%20bars.png)
+ ![Griffin](https://storage.googleapis.com/ad-auris-django-bucket/media/Screenshot%202023-10-13%20at%2012.30.56%20PM.png)
 
+ ## Ve's cliffnotes on StyleTTS2
+
+ StyleTTS2 is an advanced text-to-speech (TTS) model that represents a significant step forward in speech synthesis. Developed by Yinghao Aaron Li and his team, it is designed to achieve human-level TTS synthesis by leveraging style diffusion and adversarial training with large speech language models (SLMs). The official GitHub repository is [here](https://github.com/yl4579/StyleTTS2).
+
+ ### Key Features of StyleTTS2
+ **Style Diffusion:** StyleTTS2 models styles as latent random variables sampled through diffusion models, which allows it to generate the most suitable style for the text without requiring reference speech. It implements latent diffusion efficiently while benefiting from the diverse speech synthesis capabilities of diffusion models.
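
Roughly, this means a style vector is produced by iteratively denoising random noise conditioned on the text, rather than copied from reference audio. A minimal, unofficial sketch of the idea follows; the denoiser, dimensions, and sampler below are hypothetical stand-ins, not the StyleTTS2 code.

```python
import torch
import torch.nn as nn

class StyleDenoiser(nn.Module):
    """Hypothetical text-conditioned denoiser: predicts the noise in a style vector."""
    def __init__(self, style_dim=128, text_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(style_dim + text_dim + 1, 256), nn.SiLU(),
            nn.Linear(256, style_dim),
        )

    def forward(self, noisy_style, text_emb, t):
        # t is a scalar timestep in [0, 1], broadcast across the batch
        t = t.expand(noisy_style.size(0), 1)
        return self.net(torch.cat([noisy_style, text_emb, t], dim=-1))

@torch.no_grad()
def sample_style(denoiser, text_emb, steps=10, style_dim=128):
    """Crude Euler-style reverse diffusion: start from noise, end with a style vector.

    No reference audio is needed; the text embedding alone conditions the sample.
    """
    s = torch.randn(text_emb.size(0), style_dim)   # pure noise
    for i in reversed(range(1, steps + 1)):
        t = torch.tensor([i / steps])
        eps = denoiser(s, text_emb, t)              # predicted noise at this step
        s = s - eps / steps                         # one small denoising step
    return s

# Usage: one style vector per input text embedding (all dimensions are made up).
denoiser = StyleDenoiser()
text_emb = torch.randn(2, 512)                      # e.g. from a text encoder
style = sample_style(denoiser, text_emb)
print(style.shape)                                  # torch.Size([2, 128])
```
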
+
+ **Adversarial Training with SLMs:** The use of large pre-trained SLMs, such as WavLM, as discriminators is a novel aspect. The model incorporates differentiable duration modeling for end-to-end training, which results in improved speech naturalness.
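
A hedged sketch of what adversarial training against a frozen speech-model feature extractor can look like: the `FrozenSLM` module below is a made-up stand-in for something WavLM-like, and the loss shape is generic, not the actual StyleTTS2 discriminator.

```python
import torch
import torch.nn as nn

class FrozenSLM(nn.Module):
    """Stand-in for a frozen speech language model used as a feature extractor."""
    def __init__(self, feat_dim=768):
        super().__init__()
        self.proj = nn.Conv1d(1, feat_dim, kernel_size=400, stride=320)
        for p in self.parameters():
            p.requires_grad_(False)           # the SLM itself stays frozen

    def forward(self, wav):                   # wav: (B, T) raw audio
        return self.proj(wav.unsqueeze(1))    # (B, feat_dim, frames)

class SLMDiscriminator(nn.Module):
    """Small trainable head that scores SLM features as real or synthesized."""
    def __init__(self, feat_dim=768):
        super().__init__()
        self.head = nn.Conv1d(feat_dim, 1, kernel_size=1)

    def forward(self, feats):
        return self.head(feats)               # (B, 1, frames) of logits

slm, disc = FrozenSLM(), SLMDiscriminator()
bce = nn.BCEWithLogitsLoss()

def discriminator_loss(real_wav, fake_wav):
    real = disc(slm(real_wav))
    fake = disc(slm(fake_wav.detach()))       # stop gradients into the generator
    return bce(real, torch.ones_like(real)) + bce(fake, torch.zeros_like(fake))

def generator_loss(fake_wav):
    fake = disc(slm(fake_wav))                # gradients flow back to the TTS model
    return bce(fake, torch.ones_like(fake))   # try to fool the discriminator

real_wav, fake_wav = torch.randn(2, 16000), torch.randn(2, 16000, requires_grad=True)
print(discriminator_loss(real_wav, fake_wav).item(), generator_loss(fake_wav).item())
```
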
+
+ **Human-Level TTS Synthesis:** StyleTTS2 surpasses human recordings on single-speaker datasets and matches them on multi-speaker datasets, as judged by native English speakers. When trained on the LibriTTS dataset, it outperforms previous publicly available models for zero-shot speaker adaptation.
+
+ **Training and Inference:** The model undergoes an end-to-end training process that jointly optimizes all components, including direct waveform synthesis and adversarial training with SLMs. It uses differentiable duration modeling and a non-parametric differentiable upsampler for stability during training.
+
+ **Diverse Speech Generation:** The style diffusion approach in StyleTTS2 allows for diverse speech generation without the need for reference audio, a significant improvement over traditional TTS models that often rely on reference speech for expressiveness.
+
+ ### What makes the speech generation go at lightning speed
+
+ StyleTTS2 incorporates several methods that contribute to faster and more efficient speech generation compared to traditional text-to-speech (TTS) models:
+
+ **End-to-End Training:** The end-to-end (E2E) training approach in StyleTTS2 optimizes all components of the TTS system simultaneously. This means that during inference, the system does not rely on separate, fixed components such as pre-trained vocoders for converting mel-spectrograms into waveforms. This integrated approach can lead to faster processing during both training and inference.
+
+ **Non-Autoregressive Framework:** StyleTTS2, like its predecessor StyleTTS, is based on a non-autoregressive TTS framework. Non-autoregressive models can generate speech faster than autoregressive models because they do not require the sequential generation of each audio segment; instead, they can generate multiple parts of the speech simultaneously.
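
A toy contrast of the two regimes (illustrative only, not StyleTTS2 code): the autoregressive path needs one step per output frame, while the non-autoregressive path maps all frames in a single pass.

```python
import torch
import torch.nn as nn

hidden, n_frames, dim = torch.randn(1, 200, 256), 200, 256

autoregressive_step = nn.GRUCell(dim, dim)    # must be called once per frame
parallel_decoder = nn.Linear(dim, 80)         # maps every frame at once

def generate_autoregressive(hidden):
    state = torch.zeros(1, dim)
    frames = []
    for t in range(n_frames):                 # O(N) sequential steps
        state = autoregressive_step(hidden[:, t], state)
        frames.append(state)
    return torch.stack(frames, dim=1)

def generate_parallel(hidden):
    return parallel_decoder(hidden)           # one pass over the whole sequence

with torch.no_grad():
    print(generate_autoregressive(hidden).shape)  # torch.Size([1, 200, 256])
    print(generate_parallel(hidden).shape)        # torch.Size([1, 200, 80])
```
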
+
+ **Direct Waveform Synthesis:** The model employs a modified decoder that directly generates the waveform, rather than producing intermediate representations like mel-spectrograms that then need to be converted into audio. This direct approach reduces processing time by eliminating the separate vocoding stage.
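
Another toy contrast with made-up module sizes: a two-stage mel-plus-vocoder pipeline versus a decoder that upsamples hidden frames straight to audio.

```python
import torch
import torch.nn as nn

frames = torch.randn(1, 256, 200)             # (B, channels, frame count)

# Two-stage: predict a mel-spectrogram, then hand it to a separate vocoder.
to_mel = nn.Conv1d(256, 80, kernel_size=1)
vocoder = nn.ConvTranspose1d(80, 1, kernel_size=512, stride=256)

# Single-stage: the decoder itself upsamples hidden frames to the waveform.
direct_decoder = nn.ConvTranspose1d(256, 1, kernel_size=512, stride=256)

with torch.no_grad():
    mel = to_mel(frames)                       # (1, 80, 200)
    wav_two_stage = vocoder(mel)               # extra model, extra pass
    wav_direct = direct_decoder(frames)        # one pass, no separate vocoder
print(wav_two_stage.shape, wav_direct.shape)
```
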
+
+ **Diffusion Model for Style Sampling:** The style diffusion approach in StyleTTS2 lets the model sample speech styles efficiently. This can be faster than traditional style encoding techniques, which often require additional processing to capture the style from reference speech.
+
+ **Differentiable Upsampling:** The differentiable upsampling used in StyleTTS2 is designed to be more efficient and stable during training, which helps the model converge faster and reduces overall training time.
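
A minimal sketch of one way to make length regulation differentiable, using Gaussian-style soft alignment over predicted durations; the actual non-parametric upsampler in StyleTTS2 may differ in detail, so treat this as an assumption-laden illustration.

```python
import torch

def soft_upsample(phoneme_feats, durations, sigma=1.0):
    """Differentiable length regulation (Gaussian-upsampling-style sketch).

    phoneme_feats: (B, N, D) per-phoneme features
    durations:     (B, N) predicted durations in frames (can be fractional)
    """
    centers = torch.cumsum(durations, dim=1) - 0.5 * durations        # (B, N)
    total = int(durations.sum(dim=1).max().round().item())            # output frames
    t = torch.arange(total, dtype=phoneme_feats.dtype)                # (T,)
    # Soft alignment: every frame attends to every phoneme, so gradients
    # flow back into the duration predictor instead of a hard repeat.
    logits = -((t[None, :, None] - centers[:, None, :]) ** 2) / (2 * sigma ** 2)
    weights = torch.softmax(logits, dim=-1)                           # (B, T, N)
    return weights @ phoneme_feats                                    # (B, T, D)

phonemes = torch.randn(1, 6, 64)
durations = torch.tensor([[3.0, 5.5, 2.0, 4.0, 6.5, 3.0]], requires_grad=True)
frames = soft_upsample(phonemes, durations)
frames.sum().backward()                        # durations receive a gradient
print(frames.shape, durations.grad is not None)
```
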
+
+ **Optimized Model Components:** The integration of advanced components such as the multi-period discriminator (MPD) and multi-resolution discriminator (MRD), along with efficient loss functions like the LSGAN loss, contributes to more efficient training and potentially faster speech generation.
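
For reference, the LSGAN objective itself is simple. Below is a generic sketch with the discriminators treated as abstract score maps (stand-ins for MPD/MRD outputs, not the actual implementations).

```python
import torch

def lsgan_d_loss(disc_outputs_real, disc_outputs_fake):
    # Each discriminator pushes real scores toward 1 and fake scores toward 0.
    loss = 0.0
    for real, fake in zip(disc_outputs_real, disc_outputs_fake):
        loss = loss + torch.mean((real - 1.0) ** 2) + torch.mean(fake ** 2)
    return loss

def lsgan_g_loss(disc_outputs_fake):
    # The generator pushes fake scores toward 1 (least squares, no log terms).
    return sum(torch.mean((fake - 1.0) ** 2) for fake in disc_outputs_fake)

# Stand-in score maps from two hypothetical discriminators (MPD-like, MRD-like).
real_scores = [torch.rand(2, 1, 50), torch.rand(2, 1, 80)]
fake_scores = [torch.rand(2, 1, 50), torch.rand(2, 1, 80)]
print(lsgan_d_loss(real_scores, fake_scores).item())
print(lsgan_g_loss(fake_scores).item())
```
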
+
+ While these methods contribute to the efficiency of StyleTTS2, the actual speed of speech generation also depends on factors such as the hardware used, the complexity of the input text, and the specific configuration of the model.
 
  **NOTE: StyleTTS 2 does better on longer texts.** For example, making it say "hi" will produce a lower-quality result than making it say a longer phrase.""")
  # gr.TabbedInterface([vctk, clone, lj, longText], ['Multi-Voice', 'Voice Cloning', 'LJSpeech', 'Long Text [Beta]'])
  gr.TabbedInterface([vctk, clone, lj], ['Multi-Voice', 'Voice Cloning', 'LJSpeech', 'Long Text [Beta]'])
  gr.Markdown("""