# SPIN-Diffusion-iter3

Part of the SPIN-Diffusion collection: models fine-tuned with SPIN-Diffusion across iterations 1, 2, and 3, based on Stable Diffusion 1.5.
Paper: [Self-Play Fine-Tuning of Diffusion Models for Text-to-Image Generation](https://huggingface.co/papers/2402.10210)
This model is the iteration-3 checkpoint of self-play fine-tuning, starting from runwayml/stable-diffusion-v1-5 and trained on synthetic data built from the winner images of the yuvalkirstain/pickapic_v2 dataset. A Gradio demo is available at UCLA-AGI/SPIN-Diffusion-demo-v1.
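For intuition about the training signal, here is a minimal, schematic sketch, not the authors' code: at each SPIN iteration, the model is trained so that, relative to a frozen reference model, it denoises real winner images better than images generated by its previous-iteration self. The pairwise logistic loss below illustrates this idea on dummy tensors; the tensor shapes, the `beta` coefficient, and the exact loss form are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn.functional as F

# Illustrative per-sample denoising (noise-prediction) MSE losses.
# In real training these would be ||eps_pred - eps||^2 at sampled timesteps.
l_theta_win = torch.rand(8)  # current model on real "winner" images (assumed batch of 8)
l_theta_gen = torch.rand(8)  # current model on its own previous-iteration generations
l_ref_win = torch.rand(8)    # frozen reference model on the same winner images
l_ref_gen = torch.rand(8)    # frozen reference model on the same generated images

beta = 2000.0  # assumed scaling coefficient; the paper's value may differ

# Pairwise logistic loss: push the model to denoise real winners better
# than its own generations, measured relative to the reference model.
margin = (l_theta_win - l_ref_win) - (l_theta_gen - l_ref_gen)
loss = -F.logsigmoid(-beta * margin).mean()
print(loss.item())
```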
To use the model, first load the SD1.5 base pipeline and then replace its UNet with the fine-tuned version:
```python
from diffusers import StableDiffusionPipeline, UNet2DConditionModel
import torch

# Load the Stable Diffusion 1.5 base pipeline
model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)

# Swap in the SPIN-Diffusion fine-tuned UNet (iteration 3)
unet_id = "UCLA-AGI/SPIN-Diffusion-iter3"
unet = UNet2DConditionModel.from_pretrained(unet_id, subfolder="unet", torch_dtype=torch.float16)
pipe.unet = unet
pipe = pipe.to("cuda")

# The rest of your generation code, e.g.:
image = pipe("a photo of an astronaut riding a horse").images[0]
```
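A quick way to see the effect of the swapped UNet is to generate from the same seed before and after the swap. The script below is a sketch of that comparison; the prompt and seed are arbitrary choices, and a CUDA device is assumed.

```python
import torch
from diffusers import StableDiffusionPipeline, UNet2DConditionModel

prompt = "a photo of an astronaut riding a horse"  # arbitrary example prompt

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Generate with the base UNet at a fixed seed
generator = torch.Generator("cuda").manual_seed(0)
base_image = pipe(prompt, generator=generator).images[0]

# Swap in the SPIN-Diffusion UNet and regenerate from the same seed,
# so any difference comes only from the fine-tuned UNet
pipe.unet = UNet2DConditionModel.from_pretrained(
    "UCLA-AGI/SPIN-Diffusion-iter3", subfolder="unet", torch_dtype=torch.float16
).to("cuda")
generator = torch.Generator("cuda").manual_seed(0)
spin_image = pipe(prompt, generator=generator).images[0]

base_image.save("base.png")
spin_image.save("spin_iter3.png")
```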
Evaluation results:

| Metric | Best of Five | Mean | Median |
|---|---|---|---|
| HPS | 0.28 | 0.27 | 0.27 |
| Aesthetic | 6.26 | 5.94 | 5.98 |
| Image Reward | 1.13 | 0.53 | 0.67 |
| Pickapic Score | 22.00 | 21.36 | 21.46 |
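On the column names: reading "Best of Five" as the score of the best of five generated images per prompt, with "Mean" and "Median" aggregating over all samples, the aggregation would look like the toy sketch below. The five-samples-per-prompt setup is an inference from the column header, and the scores are made-up placeholders.

```python
import statistics

# Toy per-prompt scores: 5 generated images per prompt (assumed setup)
scores = {
    "prompt A": [0.26, 0.27, 0.25, 0.28, 0.27],
    "prompt B": [0.27, 0.26, 0.28, 0.27, 0.26],
}

# "Best of Five": average of the best sample per prompt
best_of_five = statistics.mean(max(s) for s in scores.values())
# "Mean" / "Median": aggregated over all generated samples
mean_score = statistics.mean(x for s in scores.values() for x in s)
median_score = statistics.median(x for s in scores.values() for x in s)
print(best_of_five, mean_score, median_score)
```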
## Citation

```bibtex
@misc{yuan2024self,
  title={Self-Play Fine-Tuning of Diffusion Models for Text-to-Image Generation},
  author={Yuan, Huizhuo and Chen, Zixiang and Ji, Kaixuan and Gu, Quanquan},
  year={2024},
  eprint={2402.10210},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```