---
title: EchoMimic
emoji: 🐨
colorFrom: pink
colorTo: blue
sdk: gradio
sdk_version: 5.4.0
app_file: webgui.py
pinned: false
suggested_hardware: a10g-large
short_description: Audio-Driven Portrait Animations
---

# EchoMimic: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditioning

Zhiyuan Chen\*, Jiajiong Cao\*, Zhiquan Chen, Yuming Li, Chenguang Ma

\*Equal Contribution.
Terminal Technology Department, Alipay, Ant Group.

## 📣 Updates

* [2024.07.17] 🔥🔥🔥 Accelerated models and pipeline are released. Inference speed is improved by **10x** (from ~7 min/240 frames to ~50 s/240 frames on a V100 GPU).
* [2024.07.14] 🔥 [ComfyUI](https://github.com/smthemex/ComfyUI_EchoMimic) is now available. Thanks @smthemex for the contribution.
* [2024.07.13] 🔥 Thanks [NewGenAI](https://www.youtube.com/@StableAIHub) for the [video installation tutorial](https://www.youtube.com/watch?v=8R0lTIY7tfI).
* [2024.07.13] 🔥 We release our pose & audio driven codes and models.
* [2024.07.12] 🔥 WebUI and GradioUI versions are released. We thank @greengerong, @Robin021 and @O-O1024 for their contributions.
* [2024.07.12] 🔥 Our [paper](https://arxiv.org/abs/2407.08136) is publicly available on arXiv.
* [2024.07.09] 🔥 We release our audio driven codes and models.

## Gallery

### Audio Driven (Sing)
### Audio Driven (English)
### Audio Driven (Chinese)
### Landmark Driven
### Audio + Selected Landmark Driven
**(Some demo images above are sourced from image websites. If there is any infringement, we will immediately remove them and apologize.)**

## Installation

### Download the Codes

```bash
git clone https://github.com/BadToBest/EchoMimic
cd EchoMimic
```

### Python Environment Setup

- Tested System Environment: CentOS 7.2 / Ubuntu 22.04, CUDA >= 11.7
- Tested GPUs: A100 (80G) / RTX 4090D (24G) / V100 (16G)
- Tested Python Versions: 3.8 / 3.10 / 3.11

Create a conda environment (recommended):

```bash
conda create -n echomimic python=3.8
conda activate echomimic
```

Install packages with `pip`:

```bash
pip install -r requirements.txt
```

### Download ffmpeg-static

Download and decompress [ffmpeg-static](https://www.johnvansickle.com/ffmpeg/old-releases/ffmpeg-4.4-amd64-static.tar.xz), then set:

```bash
export FFMPEG_PATH=/path/to/ffmpeg-4.4-amd64-static
```

### Download pretrained weights

```shell
git lfs install
git clone https://huggingface.co/BadToBest/EchoMimic pretrained_weights
```

The **pretrained_weights** directory is organized as follows:

```
./pretrained_weights/
├── denoising_unet.pth
├── reference_unet.pth
├── motion_module.pth
├── face_locator.pth
├── sd-vae-ft-mse
│   └── ...
├── sd-image-variations-diffusers
│   └── ...
└── audio_processor
    └── whisper_tiny.pt
```

**denoising_unet.pth**, **reference_unet.pth**, **motion_module.pth** and **face_locator.pth** are the main checkpoints of **EchoMimic**. The other models can also be downloaded from their original hubs, thanks to their brilliant works:

- [sd-vae-ft-mse](https://huggingface.co/stabilityai/sd-vae-ft-mse)
- [sd-image-variations-diffusers](https://huggingface.co/lambdalabs/sd-image-variations-diffusers)
- [audio_processor (whisper)](https://openaipublic.azureedge.net/main/whisper/models/65147644a518d12f04e32d6f3b26facc3f8dd46e5390956a9424a650c0ce22b9/tiny.pt)
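If `git lfs` is not available on your machine, the same weights can be fetched with the `huggingface_hub` Python package instead. A minimal sketch, assuming `huggingface_hub` has been installed (e.g. via `pip install huggingface_hub`):

```python
# Minimal sketch: download the EchoMimic weights without git-lfs.
# Assumes the huggingface_hub package is installed (pip install huggingface_hub).
from huggingface_hub import snapshot_download

# Mirrors `git clone https://huggingface.co/BadToBest/EchoMimic pretrained_weights`
# by downloading every file in the repo into ./pretrained_weights.
snapshot_download(
    repo_id="BadToBest/EchoMimic",
    local_dir="pretrained_weights",
)
```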
### Audio-Driven Algo Inference

Run the Python inference scripts:

```bash
python -u infer_audio2vid.py
python -u infer_audio2vid_pose.py
```

### Audio-Driven Algo Inference on Your Own Cases

Edit the inference config file **./configs/prompts/animation.yaml** and add your own case:

```yaml
test_cases:
  "path/to/your/image":
    - "path/to/your/audio"
```

Then run the Python inference script:

```bash
python -u infer_audio2vid.py
```

### Motion Alignment between Ref. Img. and Driven Vid.

(First, download the checkpoints with the '_pose.pth' postfix from Hugging Face.)

Edit `driver_video` and `ref_image` to your own paths in `demo_motion_sync.py`, then run:

```bash
python -u demo_motion_sync.py
```

### Audio&Pose-Driven Algo Inference

Edit **./configs/prompts/animation_pose.yaml**, then run:

```bash
python -u infer_audio2vid_pose.py
```

### Pose-Driven Algo Inference

Set `draw_mouse=True` in line 135 of `infer_audio2vid_pose.py`. Edit **./configs/prompts/animation_pose.yaml**, then run:

```bash
python -u infer_audio2vid_pose.py
```

### Run the Gradio UI

Thanks to the contribution from @Robin021:

```bash
python -u webgui.py --server_port=3000
```

## Release Plans

| Status | Milestone | ETA |
|:------:|:----------|:---:|
| ✅ | The inference source code of the Audio-Driven algo released on GitHub | 9th July, 2024 |
| ✅ | Pretrained models trained on English and Mandarin Chinese released | 9th July, 2024 |
| ✅ | The inference source code of the Pose-Driven algo released on GitHub | 13th July, 2024 |
| ✅ | Pretrained models with better pose control released | 13th July, 2024 |
| ✅ | Accelerated models released | 17th July, 2024 |
| 🚀 | Pretrained models with better singing performance to be released | TBD |
| 🚀 | Large-scale and high-resolution Chinese-based talking head dataset | TBD |

## Acknowledgements

We would like to thank the contributors to the [AnimateDiff](https://github.com/guoyww/AnimateDiff), [Moore-AnimateAnyone](https://github.com/MooreThreads/Moore-AnimateAnyone) and [MuseTalk](https://github.com/TMElyralab/MuseTalk) repositories for their open research and exploration. We are also grateful to [V-Express](https://github.com/tencent-ailab/V-Express) and [hallo](https://github.com/fudan-generative-vision/hallo) for their outstanding work in the area of diffusion-based talking heads. If we have missed any open-source projects or related articles, we will supplement the acknowledgements of this specific work immediately.

## Citation

If you find our work useful for your research, please consider citing the paper:

```
@misc{chen2024echomimic,
  title={EchoMimic: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditioning},
  author={Zhiyuan Chen and Jiajiong Cao and Zhiquan Chen and Yuming Li and Chenguang Ma},
  year={2024},
  eprint={2407.08136},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```

## Star History

[![Star History Chart](https://api.star-history.com/svg?repos=BadToBest/EchoMimic&type=Date)](https://star-history.com/#BadToBest/EchoMimic&Date)