TweedieMix: Improving Multi-Concept Fusion for Diffusion-based Image/Video Generation
Abstract
Despite significant advancements in customizing text-to-image and video generation models, generating images and videos that effectively integrate multiple personalized concepts remains a challenging task. To address this, we present TweedieMix, a novel method for composing customized diffusion models during the inference phase. By analyzing the properties of reverse diffusion sampling, our approach divides the sampling process into two stages. During the initial steps, we apply a multiple object-aware sampling technique to ensure the inclusion of the desired target objects. In the later steps, we blend the appearances of the custom concepts in the denoised image space using Tweedie's formula. Our results demonstrate that TweedieMix can generate multiple personalized concepts with higher fidelity than existing methods. Moreover, our framework can be effortlessly extended to image-to-video diffusion models, enabling the generation of videos that feature multiple personalized concepts. Results and source code are available on our anonymous project page.
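To make the two-stage idea concrete, below is a minimal sketch of the sampling loop described in the abstract. It is not the authors' implementation: the function names, the mask-based region blending, the use of the first model as the stage-1 base model, and the DDIM-style deterministic update are all illustrative assumptions. Only Tweedie's formula itself (the posterior-mean estimate of the clean image from a noisy sample and a noise prediction) is standard.

```python
import torch

def tweedie_x0(x_t, eps, alpha_bar_t):
    """Tweedie's formula: estimate the clean image x_0 from the noisy
    sample x_t and the model's noise prediction eps."""
    return (x_t - torch.sqrt(1.0 - alpha_bar_t) * eps) / torch.sqrt(alpha_bar_t)

@torch.no_grad()
def two_stage_sample(eps_models, masks, alphas_bar, n_steps, split_step, shape):
    """Two-stage reverse diffusion in the spirit of TweedieMix (sketch).

    eps_models: per-concept noise predictors, each callable as f(x_t, t).
    masks: per-concept spatial masks (assumed to sum to 1 over the image).
    Stage 1 (t > split_step): sample with a single base model so the layout
    contains all target objects (stands in for multi-object-aware sampling).
    Stage 2 (t <= split_step): blend the concepts' Tweedie-denoised
    estimates region-wise, then continue sampling from the blended estimate.
    """
    x = torch.randn(shape)
    for i in reversed(range(n_steps)):
        a_t = alphas_bar[i]
        a_prev = alphas_bar[i - 1] if i > 0 else torch.tensor(1.0)
        if i > split_step:
            # Stage 1: base model keeps all desired objects in the scene.
            x0_hat = tweedie_x0(x, eps_models[0](x, i), a_t)
        else:
            # Stage 2: fuse per-concept appearance in denoised image space.
            x0_hat = sum(
                m * tweedie_x0(x, f(x, i), a_t)
                for f, m in zip(eps_models, masks)
            )
        # DDIM-style deterministic step from the (possibly blended) x0.
        eps_hat = (x - torch.sqrt(a_t) * x0_hat) / torch.sqrt(1.0 - a_t)
        x = torch.sqrt(a_prev) * x0_hat + torch.sqrt(1.0 - a_prev) * eps_hat
    return x
```

One motivation for blending in the denoised image space rather than averaging noise predictions directly is that the Tweedie estimates of different customized models live in a common clean-image domain, so region-wise composition is better conditioned than mixing noise residuals at intermediate noise levels.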
Community
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- CustomCrafter: Customized Video Generation with Preserving Motion and Concept Composition Abilities (2024)
- TextBoost: Towards One-Shot Personalization of Text-to-Image Models via Fine-tuning Text Encoder (2024)
- One-Shot Learning Meets Depth Diffusion in Multi-Object Videos (2024)
- DiffLoRA: Generating Personalized Low-Rank Adaptation Weights with Diffusion (2024)
- Enhancing Conditional Image Generation with Explainable Latent Space Manipulation (2024)