|
--- |
|
library_name: diffusers |
|
license: cc-by-nc-2.0 |
|
base_model: |
|
- black-forest-labs/FLUX.1-Fill-dev |
|
pipeline_tag: image-to-image |
|
tags: |
|
- tryon |
|
- vto |
|
--- |
|
|
|
# Model Card for CAT-Tryoff-Flux |
|
|
|
CAT-Tryoff-Flux is a virtual try-off model built with the same method as [CATVTON-FLUX](https://huggingface.co/xiaozaa/catvton-flux-alpha). It extracts and reconstructs the front view of a clothing item from an image of a person wearing it.
|
|
|
## Model Details |
|
|
|
### Model Description |
|
|
|
|
|
|
- **Developed by:** [X/Twitter: Black Magic An](https://x.com/MrsZaaa)
|
|
|
### Model Sources
|
|
|
|
|
|
- **Repository:** [GitHub](https://github.com/nftblackmagic/catvton-flux)
|
|
|
## Uses |
|
|
|
The model is designed for virtual try-off: given a photo of a person wearing a garment, it reconstructs a front view of the garment itself. It can be used directly through the repository's command-line interface with the following inputs (a minimal sketch of preparing them follows the list):

- Input person image
- Person mask
- Garment image
- Random seed (optional)
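
The exact CLI flags are defined in the GitHub repository's inference scripts; as a rough sketch (the file names and seed value below are placeholders, not part of the released scripts), the same inputs can be prepared in Python like this:

```python
import torch
from PIL import Image

# Placeholder file names; substitute your own inputs.
person_image = Image.open("person.jpg").convert("RGB")     # person wearing the garment
person_mask = Image.open("person_mask.png").convert("L")   # mask over the worn garment region
generator = torch.Generator(device="cuda").manual_seed(42) # optional fixed seed for reproducibility
```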
|
|
|
## How to Get Started with the Model |
|
|
|
```python
import torch
from diffusers import FluxFillPipeline, FluxTransformer2DModel

# Load the try-off transformer weights.
transformer = FluxTransformer2DModel.from_pretrained(
    "xiaozaa/cat-tryoff-flux",
    torch_dtype=torch.bfloat16
)

# Plug the try-off transformer into the FLUX Fill inpainting pipeline.
pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16
).to("cuda")
```
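
Continuing from the pipeline above, a minimal inference sketch follows. The prompt text, resolution, and sampling settings are illustrative assumptions rather than the repository's exact values; the official inference scripts on GitHub handle the garment/person image concatenation and masking that the CatVTON-style in-context conditioning relies on.

```python
import torch
from PIL import Image

# Placeholder inputs: a person photo and a mask over the worn garment region.
image = Image.open("person.jpg").convert("RGB")
mask = Image.open("person_mask.png").convert("L")

# Illustrative prompt and sampling settings.
result = pipe(
    prompt="a flat front view of the clothing item worn by the person",
    image=image,
    mask_image=mask,
    height=1024,
    width=768,
    guidance_scale=30.0,
    num_inference_steps=50,
    generator=torch.Generator(device="cuda").manual_seed(42),
).images[0]
result.save("reconstructed_garment.png")
```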
|
|
|
## Training Details |
|
|
|
### Training Data |
|
|
|
VITON-HD dataset |
|
|
|
### Training Procedure |
|
|
|
Fine-tuning of FLUX.1-Fill-dev.
|
|
|
|
|
## Citation

**BibTeX:**
|
```bibtex
|
@misc{chong2024catvtonconcatenationneedvirtual, |
|
title={CatVTON: Concatenation Is All You Need for Virtual Try-On with Diffusion Models}, |
|
author={Zheng Chong and Xiao Dong and Haoxiang Li and Shiyue Zhang and Wenqing Zhang and Xujie Zhang and Hanqing Zhao and Xiaodan Liang}, |
|
year={2024}, |
|
eprint={2407.15886}, |
|
archivePrefix={arXiv}, |
|
primaryClass={cs.CV}, |
|
url={https://arxiv.org/abs/2407.15886}, |
|
} |
|
@article{lhhuang2024iclora, |
|
title={In-Context LoRA for Diffusion Transformers}, |
|
author={Huang, Lianghua and Wang, Wei and Wu, Zhi-Fan and Shi, Yupeng and Dou, Huanzhang and Liang, Chen and Feng, Yutong and Liu, Yu and Zhou, Jingren}, |
|
journal={arXiv preprint arXiv:2410.23775},
|
year={2024} |
|
} |
|
``` |