Abstract
Diffusion models are the de facto approach for generating high-quality images and videos, but learning high-dimensional models remains a formidable task due to computational and optimization challenges. Existing methods often resort to training cascaded models in pixel space or using a downsampled latent space of a separately trained auto-encoder. In this paper, we introduce Matryoshka Diffusion Models (MDM), an end-to-end framework for high-resolution image and video synthesis. We propose a diffusion process that denoises inputs at multiple resolutions jointly and uses a NestedUNet architecture in which features and parameters for small-scale inputs are nested within those of large scales. In addition, MDM enables a progressive training schedule from lower to higher resolutions, which leads to significant improvements in optimization for high-resolution generation. We demonstrate the effectiveness of our approach on various benchmarks, including class-conditioned image generation, high-resolution text-to-image, and text-to-video applications. Remarkably, we can train a single pixel-space model at resolutions of up to 1024×1024 pixels, demonstrating strong zero-shot generalization using the CC12M dataset, which contains only 12 million images.
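To make the joint multi-resolution idea concrete, below is a minimal PyTorch sketch of a two-level nested denoiser that predicts noise at a low and a high resolution in a single forward pass, with the low-resolution branch reused inside the high-resolution one. This is only an illustration of the concept stated in the abstract, not the paper's actual NestedUNet: the module names (`TinyUNet`, `ToyNestedUNet`), the fusion layer, and the fixed noise level `alpha` are assumptions made for the sketch, and a real MDM setup would additionally condition on the diffusion timestep, weight the per-resolution losses, and follow the progressive low-to-high training schedule.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUNet(nn.Module):
    """Toy denoiser standing in for the low-resolution 'inner' network."""
    def __init__(self, channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class ToyNestedUNet(nn.Module):
    """Toy two-level nested denoiser: the low-res branch is nested inside
    the high-res branch, and both scales are denoised jointly."""
    def __init__(self, channels=64):
        super().__init__()
        self.inner = TinyUNet(channels)               # shared low-res denoiser
        self.outer_in = nn.Conv2d(3, channels, 3, padding=1)
        self.fuse = nn.Conv2d(channels + 3, channels, 3, padding=1)
        self.outer_out = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, x_lo, x_hi):
        eps_lo = self.inner(x_lo)                     # denoise low resolution
        # Upsample the low-res prediction and fuse it into the high-res branch.
        lo_up = F.interpolate(eps_lo, size=x_hi.shape[-2:],
                              mode="bilinear", align_corners=False)
        h = F.silu(self.outer_in(x_hi))
        h = F.silu(self.fuse(torch.cat([h, lo_up], dim=1)))
        eps_hi = self.outer_out(h)
        return eps_lo, eps_hi

# Joint epsilon-prediction loss over both resolutions (single fixed noise level).
model = ToyNestedUNet()
img_hi = torch.randn(2, 3, 64, 64)                   # stand-in "high-res" images
img_lo = F.avg_pool2d(img_hi, 4)                      # 16x16 downsampled view
noise_hi, noise_lo = torch.randn_like(img_hi), torch.randn_like(img_lo)
alpha = 0.7                                            # assumed noise level for the sketch
x_hi = alpha**0.5 * img_hi + (1 - alpha)**0.5 * noise_hi
x_lo = alpha**0.5 * img_lo + (1 - alpha)**0.5 * noise_lo
pred_lo, pred_hi = model(x_lo, x_hi)
loss = F.mse_loss(pred_lo, noise_lo) + F.mse_loss(pred_hi, noise_hi)
loss.backward()
```

In this toy version, the "nesting" is simply the reuse of the inner low-resolution denoiser's output as an extra input to the high-resolution branch, so both scales are optimized with one joint loss; the progressive schedule described in the abstract would correspond to training the inner network first and growing to the outer resolution later.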
Community
The model's grasp of form and structure seems remarkably strong for a model trained on such a small dataset. It's on par with, if not better than, SDXL in that regard! I imagine this is partly due to the T5 encoder, but the architecture and progressive training certainly make a big difference.
I feel like if we combined this paper's architectural/training advancements with DALL-E 3's strategy of training on highly detailed machine-generated captions, and scaled all of this up to something like LAION-2B, it could result in a very strong model.