fffiloni committed
Commit 1a0284d
1 Parent(s): 4fee78b

Update app.py

Files changed (1):
  1. app.py +2 -2
app.py CHANGED
@@ -53,7 +53,7 @@ description="""
 This demo is running on CPU. Offered by Sylvain <a href='https://twitter.com/fffiloni' target='_blank'>@fffiloni</a> • <img id='visitor-badge' alt='visitor badge' src='https://visitor-badge.glitch.me/badge?page_id=gradio-blocks.whisper-to-stable-diffusion' style='display: inline-block' /><br />
 Record an audio description of an image, stop recording, then hit the Submit button to get 2 images from Stable Diffusion.<br />
 Your audio will be translated to English through OpenAI's Whisper, then sent as a prompt to Stable Diffusion.
-Try it in French ! ;)
+Try it in French ! ;)<br />
 
 </p>
 """
@@ -64,4 +64,4 @@ Whisper is a general-purpose speech recognition model. It is trained on a large
 Model by <a href="https://github.com/openai/whisper" style="text-decoration: underline;" target="_blank">OpenAI</a>
 </p>
 """
-gr.Interface(fn=get_images, inputs=audio, outputs=[translated_prompt, gallery], title=title, description=description).queue(max_size=1000).launch(enable_queue=True)
+gr.Interface(fn=get_images, inputs=audio, outputs=[translated_prompt, gallery], title=title, description=description, article=article).queue(max_size=1000).launch(enable_queue=True)
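
For orientation, below is a minimal sketch of how the pieces touched by this commit fit together in app.py. Only the description/article HTML fragments and the final gr.Interface(...) call are taken from the diff; the title value, the audio, translated_prompt, and gallery components, and the body of get_images are hypothetical stand-ins, written against the Gradio 3.x API that the launch(enable_queue=True) call implies.

import gradio as gr

# Assumed title; the real one is defined elsewhere in app.py.
title = "Whisper to Stable Diffusion"

description = """
<p style='text-align: center;'>
This demo is running on CPU. Offered by Sylvain <a href='https://twitter.com/fffiloni' target='_blank'>@fffiloni</a><br />
Record an audio description of an image, stop recording, then hit the Submit button to get 2 images from Stable Diffusion.<br />
Your audio will be translated to English through OpenAI's Whisper, then sent as a prompt to Stable Diffusion.
Try it in French ! ;)<br />
</p>
"""

article = """
<p style='text-align: center;'>
Model by <a href="https://github.com/openai/whisper" style="text-decoration: underline;" target="_blank">OpenAI</a>
</p>
"""

# Hypothetical components; the real ones are created earlier in app.py.
audio = gr.Audio(source="microphone", type="filepath", label="Record your prompt")
translated_prompt = gr.Textbox(label="Translated prompt")
gallery = gr.Gallery(label="Generated images")

def get_images(audio_path):
    # Placeholder: the real function runs Whisper to translate the recording
    # into an English prompt, then asks Stable Diffusion for two images.
    return "a placeholder prompt", []

# The commit's change: pass the existing `article` HTML so Gradio renders it
# below the demo, in addition to the title and description shown above it.
gr.Interface(
    fn=get_images,
    inputs=audio,
    outputs=[translated_prompt, gallery],
    title=title,
    description=description,
    article=article,
).queue(max_size=1000).launch(enable_queue=True)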