---
title: Audio Llama
emoji: π
colorFrom: blue
colorTo: indigo
sdk: gradio
app_file: app.py
pinned: false
short_description: generated sound from video/text and search
---
Based on @
Taming Multimodal Joint Training for High-Quality Video-to-Audio Synthesis
Ho Kei Cheng, Masato Ishii, Akio Hayakawa, Takashi Shibuya, Alexander Schwing, Yuki Mitsufuji
University of Illinois Urbana-Champaign, Sony AI, and Sony Group Corporation
[Paper (being prepared)] [Project Page]
Note: This repository is still under construction. Single-example inference should work as expected. The training code will be added. Code is subject to non-backward-compatible changes.
Highlight
MMAudio generates synchronized audio given video and/or text inputs. Our key innovation is multimodal joint training, which allows training on a wide range of audio-visual and audio-text datasets. Moreover, a synchronization module aligns the generated audio with the video frames.
Results
(All audio from our algorithm MMAudio)
Videos from Sora:
https://github.com/user-attachments/assets/82afd192-0cee-48a1-86ca-bd39b8c8f330
Videos from MovieGen/Hunyuan Video/VGGSound:
https://github.com/user-attachments/assets/29230d4e-21c1-4cf8-a221-c28f2af6d0ca
For more results, visit https://hkchengrex.com/MMAudio/video_main.html.
Installation
We have only tested this on Ubuntu.
Prerequisites
We recommend using a miniforge environment.
- Python 3.8+
- PyTorch 2.5.1+ and corresponding torchvision/torchaudio (pick your CUDA version https://pytorch.org/)
- ffmpeg<7 (this is required by torchaudio; you can install it in a miniforge environment with conda install -c conda-forge 'ffmpeg<7'; a quick environment check is sketched below)
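The following is only a minimal sketch of such a check; it assumes a CUDA build of PyTorch and simply mirrors the version expectations listed above:

```python
# Minimal environment check (sketch); expectations mirror the prerequisite list.
import shutil
import subprocess

import torch
import torchaudio
import torchvision

print("PyTorch:", torch.__version__)          # expect 2.5.1 or newer
print("torchvision:", torchvision.__version__)
print("torchaudio:", torchaudio.__version__)
print("CUDA available:", torch.cuda.is_available())

# torchaudio needs an ffmpeg below version 7 on the PATH
ffmpeg = shutil.which("ffmpeg")
if ffmpeg is None:
    print("ffmpeg not found; e.g. conda install -c conda-forge 'ffmpeg<7'")
else:
    version_line = subprocess.run(
        [ffmpeg, "-version"], capture_output=True, text=True
    ).stdout.splitlines()[0]
    print(version_line)  # check that the major version is below 7
```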
Clone our repository:
git clone https://github.com/hkchengrex/MMAudio.git
Install with pip:
cd MMAudio
pip install -e .
(If you encounter the File "setup.py" not found error, upgrade your pip with pip install --upgrade pip)
Pretrained models:
The models will be downloaded automatically when you run the demo script. MD5 checksums are provided in mmaudio/utils/download_utils.py.
| Model | Download link | File size |
|---|---|---|
| Flow prediction network, small 16kHz | mmaudio_small_16k.pth | 601M |
| Flow prediction network, small 44.1kHz | mmaudio_small_44k.pth | 601M |
| Flow prediction network, medium 44.1kHz | mmaudio_medium_44k.pth | 2.4G |
| Flow prediction network, large 44.1kHz (recommended) | mmaudio_large_44k.pth | 3.9G |
| 16kHz VAE | v1-16.pth | 655M |
| 16kHz BigVGAN vocoder | best_netG.pt | 429M |
| 44.1kHz VAE | v1-44.pth | 1.2G |
| Synchformer visual encoder | synchformer_state_dict.pth | 907M |
The 44.1kHz vocoder will be downloaded automatically.
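If you download a checkpoint manually instead, you can verify it against its published checksum with something like the sketch below; the path and expected value here are placeholders, and the real checksums live in mmaudio/utils/download_utils.py:

```python
# Verify a manually downloaded checkpoint against its MD5 checksum (sketch).
# The path and expected value are placeholders; take the real checksums
# from mmaudio/utils/download_utils.py.
import hashlib
from pathlib import Path

def md5_of(path: Path, chunk_size: int = 1 << 20) -> str:
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

ckpt = Path("weights/mmaudio_large_44k.pth")
expected = "<md5 from download_utils.py>"
print("OK" if md5_of(ckpt) == expected else "mismatch: re-download the file")
```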
The expected directory structure (full):
MMAudio
├── ext_weights
│   ├── best_netG.pt
│   ├── synchformer_state_dict.pth
│   ├── v1-16.pth
│   └── v1-44.pth
├── weights
│   ├── mmaudio_small_16k.pth
│   ├── mmaudio_small_44k.pth
│   ├── mmaudio_medium_44k.pth
│   └── mmaudio_large_44k.pth
└── ...
The expected directory structure (minimal, for the recommended model only):
MMAudio
├── ext_weights
│   ├── synchformer_state_dict.pth
│   └── v1-44.pth
├── weights
│   └── mmaudio_large_44k.pth
└── ...
Demo
By default, these scripts use the large_44k model.
In our experiments, inference only takes around 6GB of GPU memory (in 16-bit mode) which should fit in most modern GPUs.
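If you want to confirm that figure on your own GPU, you can wrap whatever call runs the model with PyTorch's peak-memory counters; in the sketch below, run_inference() is only a stand-in for the code path that demo.py executes, not an actual MMAudio function:

```python
# Measure peak GPU memory around an inference call (sketch).
# run_inference() is a placeholder for the pipeline that demo.py runs.
import torch

def run_inference():
    ...  # call the model here

torch.cuda.reset_peak_memory_stats()
with torch.autocast("cuda", dtype=torch.bfloat16):  # 16-bit mode
    run_inference()
print(f"peak GPU memory: {torch.cuda.max_memory_allocated() / 1024**3:.1f} GiB")
```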
Command-line interface
With demo.py
python demo.py --duration=8 --video=<path to video> --prompt "your prompt"
The output (audio in .flac format, and video in .mp4 format) will be saved in ./output.
See demo.py for more options.
Simply omit the --video option for text-to-audio synthesis.
The default output (and training) duration is 8 seconds. Longer or shorter durations can also work, but a large deviation from the training duration may result in lower quality.
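To run the same settings over several clips, one option is to drive demo.py from a small script. This sketch uses only the flags shown above; the clip paths and prompts are placeholders:

```python
# Batch-run demo.py over several videos (sketch; paths and prompts are placeholders).
import subprocess

clips = {
    "clips/waves.mp4": "ocean waves crashing on rocks",
    "clips/street.mp4": "busy city traffic with distant sirens",
}

for video, prompt in clips.items():
    subprocess.run(
        ["python", "demo.py", "--duration=8", f"--video={video}", "--prompt", prompt],
        check=True,
    )
# Results land in ./output as .flac audio and .mp4 video, as described above.
```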
Gradio interface
Supports video-to-audio and text-to-audio synthesis.
python gradio_demo.py
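For orientation, a stripped-down app with the same input/output shape looks roughly like the sketch below; generate() is a placeholder, and the real wiring to MMAudio lives in gradio_demo.py:

```python
# Minimal Gradio layout mirroring the demo's inputs and outputs (sketch).
# generate() is a placeholder; the actual app connects these inputs to MMAudio.
import gradio as gr

def generate(video_path, prompt, duration):
    # run the model here and return a path to the generated audio
    raise NotImplementedError

demo = gr.Interface(
    fn=generate,
    inputs=[
        gr.Video(label="Video (leave empty for text-to-audio)"),
        gr.Textbox(label="Prompt"),
        gr.Slider(1, 30, value=8, label="Duration (s)"),
    ],
    outputs=gr.Audio(label="Generated audio"),
)

if __name__ == "__main__":
    demo.launch()
```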
Known limitations
- The model sometimes generates undesired unintelligible human speech-like sounds
- The model sometimes generates undesired background music
- The model struggles with unfamiliar concepts, e.g., it can generate "gunfires" but not "RPG firing".
We believe all three of these limitations can be addressed with more high-quality training data.
Training
Work in progress.
Evaluation
Work in progress.
Acknowledgement
Many thanks to:
- Make-An-Audio 2 for the 16kHz BigVGAN pretrained model
- BigVGAN
- Synchformer