---
tags:
- text-to-image
- stable-diffusion
- dress
- try on
license: apache-2.0
language:
- en
library_name: diffusers
---

# IMAGDressing: Interactive Modular Apparel Generation for Dressing

## IMAGDressing-v1: Customizable Virtual Dressing
[**Project Page**](https://imagdressing.github.io/) **|** [**Paper (Coming Soon)**](xx) **|** [**Code**](https://github.com/muzishen/IMAGDressing)
---

## Introduction

To address the need for flexible and controllable customization in virtual try-on systems, we propose IMAGDressing-v1. Specifically, we introduce a garment UNet that captures semantic features from CLIP and texture features from the VAE. Our hybrid attention module consists of a frozen self-attention and a trainable cross-attention, which integrate these garment features into a frozen denoising UNet while preserving user-controlled editing. We will release a comprehensive dataset, IGv1, containing over 200,000 pairs of clothing and dressed images, and establish a standard data assembly pipeline. Furthermore, IMAGDressing-v1 can be combined with extensions such as ControlNet, IP-Adapter, T2I-Adapter, and AnimateDiff to enhance diversity and controllability.

![framework](assets/pipeline.png)
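To make the hybrid attention design concrete, here is a minimal PyTorch sketch of the idea: a frozen self-attention branch (weights inherited from the pretrained denoising UNet) combined with a trainable cross-attention branch over garment features. The module name, dimensions, and residual wiring are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn


class HybridAttention(nn.Module):
    """Sketch of a hybrid attention block: frozen self-attention over UNet
    hidden states plus a trainable cross-attention that injects garment
    features (CLIP semantics + VAE textures)."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        # Self-attention weights would come from the pretrained UNet; freeze them.
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        for p in self.self_attn.parameters():
            p.requires_grad = False
        # The cross-attention over garment features is the only trainable part.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, hidden_states: torch.Tensor,
                garment_features: torch.Tensor) -> torch.Tensor:
        # Frozen branch: standard self-attention on the denoising features.
        attn_out, _ = self.self_attn(hidden_states, hidden_states, hidden_states)
        # Trainable branch: attend from denoising features to garment features.
        garment_out, _ = self.cross_attn(attn_out, garment_features, garment_features)
        # Residual combination keeps the frozen UNet behavior as the base.
        return attn_out + garment_out


# Shape check: 2 samples, 77 latent tokens of width 320; 257 garment tokens.
x = torch.randn(2, 77, 320)
g = torch.randn(2, 257, 320)
print(HybridAttention(dim=320)(x, g).shape)  # torch.Size([2, 77, 320])
```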
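For pairing with extensions such as ControlNet, the surrounding plumbing follows the standard diffusers ControlNet pattern sketched below. The checkpoint IDs are real public models and the calls are standard diffusers APIs, but attaching the garment UNet itself is specific to this repo and is omitted here; `assets/pose.png` is a hypothetical input path.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Pose-conditioned ControlNet; IMAGDressing-v1's garment features would be
# injected into the pipeline's denoising UNet on top of this base setup.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

pose_image = load_image("assets/pose.png")  # hypothetical preprocessed openpose map

image = pipe(
    "a model wearing the reference garment, full body, studio lighting",
    image=pose_image,
    num_inference_steps=30,
).images[0]
image.save("dressing_controlnet.png")
```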