Init commit
- .gitattributes +4 -0
- README.md +58 -3
- config.json +42 -0
- diffusion_pytorch_model.bin +3 -0
- images/ddpo-aesthetic-samples.png +3 -0
- images/laion_1.png +3 -0
- images/laion_12.png +3 -0
- images/laion_60.png +3 -0
- model_index.json +12 -0
- scheduler_config.json +11 -0
.gitattributes
CHANGED
@@ -33,3 +33,7 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+images/ddpo-aesthetic-samples.png filter=lfs diff=lfs merge=lfs -text
+images/laion_1.png filter=lfs diff=lfs merge=lfs -text
+images/laion_12.png filter=lfs diff=lfs merge=lfs -text
+images/laion_60.png filter=lfs diff=lfs merge=lfs -text
README.md
CHANGED
@@ -1,3 +1,58 @@
----
-license: apache-2.0
----
+---
+license: apache-2.0
+tags:
+- pytorch
+- diffusers
+- unconditional-image-generation
+---
+
+# Fine-tuning `ddpm-celebahq-256` with DDPO for aesthetic quality enhancement
+
+
+![](https://huggingface.co/alkzar90/ddpo-aesthetic-celebahq-256/resolve/main/images/laion_60.png)
+
+
+**DDPO Paper**: [Training Diffusion Models with Reinforcement Learning](https://arxiv.org/abs/2305.13301)
+
+**Authors**: Kevin Black, Michael Janner, Yilun Du, Ilya Kostrikov, Sergey Levine
+
+**Abstract**:
+
+*Diffusion models are a class of flexible generative models trained with an approximation to the log-likelihood objective. However, most use cases of diffusion models are not concerned with likelihoods, but instead with downstream objectives such as human-perceived image quality or drug effectiveness. In this paper, we investigate reinforcement learning methods for directly optimizing diffusion models for such objectives. We describe how posing denoising as a multi-step decision-making problem enables a class of policy gradient algorithms, which we refer to as denoising diffusion policy optimization (DDPO), that are more effective than alternative reward-weighted likelihood approaches. Empirically, DDPO is able to adapt text-to-image diffusion models to objectives that are difficult to express via prompting, such as image compressibility, and those derived from human feedback, such as aesthetic quality. Finally, we show that DDPO can improve prompt-image alignment using feedback from a vision-language model without the need for additional data collection or human annotation. The project's website can be found at [this http URL](https://rl-diffusion.github.io/).*
+
+## Inference
+
+**DDPM** based models can use *discrete noise schedulers* such as:
+
+- [scheduling_ddpm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddpm.py)
+- [scheduling_ddim](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim.py)
+- [scheduling_pndm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_pndm.py)
+
+for inference. Note that while the *ddpm* scheduler yields the highest quality, it also takes the longest.
+For a good trade-off between quality and inference speed you might want to consider the *ddim* or *pndm* schedulers instead.
+
+See the following code:
+
+```python
+# !pip install diffusers
+from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline
+
+model_id = "alkzar90/ddpo-aesthetic-celebahq-256"
+
+# load model and scheduler
+ddpm = DDPMPipeline.from_pretrained(model_id)  # you can replace DDPMPipeline with DDIMPipeline or PNDMPipeline for faster inference
+
+# run pipeline in inference (sample random noise and denoise)
+image = ddpm().images[0]  # on older diffusers releases this was ddpm()["sample"][0]
+
+
+# save image
+image.save("ddpm_generated_image.png")
+```
+
+For more detailed information, please have a look at the [official inference example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb).
+
+## Samples
+
+![CelebA-HQ 256x256 samples generated by the DDPO fine-tuned model optimized
+for aesthetic quality](https://huggingface.co/alkzar90/ddpo-aesthetic-celebahq-256/resolve/main/images/ddpo-aesthetic-samples.png)
config.json
ADDED
@@ -0,0 +1,42 @@
+{
+  "_class_name": "UNet2DModel",
+  "_diffusers_version": "0.0.4",
+  "act_fn": "silu",
+  "attention_head_dim": null,
+  "block_out_channels": [
+    128,
+    128,
+    256,
+    256,
+    512,
+    512
+  ],
+  "center_input_sample": false,
+  "down_block_types": [
+    "DownBlock2D",
+    "DownBlock2D",
+    "DownBlock2D",
+    "DownBlock2D",
+    "AttnDownBlock2D",
+    "DownBlock2D"
+  ],
+  "downsample_padding": 0,
+  "flip_sin_to_cos": false,
+  "freq_shift": 1,
+  "in_channels": 3,
+  "layers_per_block": 2,
+  "mid_block_scale_factor": 1,
+  "norm_eps": 1e-06,
+  "norm_num_groups": 32,
+  "out_channels": 3,
+  "sample_size": 256,
+  "time_embedding_type": "positional",
+  "up_block_types": [
+    "UpBlock2D",
+    "AttnUpBlock2D",
+    "UpBlock2D",
+    "UpBlock2D",
+    "UpBlock2D",
+    "UpBlock2D"
+  ]
+}
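
Since `config.json` fully specifies the UNet architecture, the model can be loaded and inspected on its own. A minimal sketch, assuming the flat single-model layout of this commit (config and weights at the repo root):

```python
# Sketch: load and inspect the UNet described by config.json above.
# Assumes the flat repo layout of this commit (files at the repo root).
from diffusers import UNet2DModel

unet = UNet2DModel.from_pretrained("alkzar90/ddpo-aesthetic-celebahq-256")

print(unet.config.sample_size)          # 256 -> trained on 256x256 images
print(unet.config.block_out_channels)   # [128, 128, 256, 256, 512, 512]
print(sum(p.numel() for p in unet.parameters()))  # total parameter count
```

Note the single `AttnDownBlock2D`/`AttnUpBlock2D` pair: with downsampling after each of the four preceding blocks, self-attention enters only at the 16x16 feature map, mirroring the original DDPM CelebA-HQ UNet.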
diffusion_pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:aeb8d851e4fcfa62bfb10f5ecab28aaf4f8a40b18e7e2cce996367bef476cc7b
+size 454864818
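
What landed in the commit is a Git LFS pointer (spec version, SHA-256 oid, and byte size), not the roughly 455 MB weights file itself. As a sketch, the actual binary can be resolved with `huggingface_hub` (an assumption; any LFS-aware `git clone` fetches it too):

```python
# Sketch: download the real weights file this LFS pointer stands for.
# Assumes huggingface_hub is installed: pip install huggingface_hub
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="alkzar90/ddpo-aesthetic-celebahq-256",
    filename="diffusion_pytorch_model.bin",
)
print(path)  # local cache path; the file should be 454864818 bytes
```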
images/ddpo-aesthetic-samples.png
ADDED (Git LFS)

images/laion_1.png
ADDED (Git LFS)

images/laion_12.png
ADDED (Git LFS)

images/laion_60.png
ADDED (Git LFS)
model_index.json
ADDED
@@ -0,0 +1,12 @@
+{
+  "_class_name": "DDPMPipeline",
+  "_diffusers_version": "0.0.4",
+  "scheduler": [
+    "diffusers",
+    "DDPMScheduler"
+  ],
+  "unet": [
+    "diffusers",
+    "UNet2DModel"
+  ]
+}
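
`model_index.json` is what `DiffusionPipeline.from_pretrained` reads to decide which classes to assemble: here a `UNet2DModel` and a `DDPMScheduler` wrapped in a `DDPMPipeline`. A hedged sketch of assembling the same pipeline by hand, which sidesteps the per-component subfolder layout newer loaders expect (this commit keeps everything at the repo root):

```python
# Sketch: build the pipeline manually from the two components named in
# model_index.json. Assumes the flat layout of this commit.
from diffusers import DDPMPipeline, DDPMScheduler, UNet2DModel

repo_id = "alkzar90/ddpo-aesthetic-celebahq-256"

unet = UNet2DModel.from_pretrained(repo_id)         # config.json + weights
scheduler = DDPMScheduler.from_pretrained(repo_id)  # scheduler_config.json
pipe = DDPMPipeline(unet=unet, scheduler=scheduler)

image = pipe(num_inference_steps=1000).images[0]    # full ancestral sampling
```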
scheduler_config.json
ADDED
@@ -0,0 +1,11 @@
+{
+  "_class_name": "DDPMScheduler",
+  "_diffusers_version": "0.1.1",
+  "beta_end": 0.02,
+  "beta_schedule": "linear",
+  "beta_start": 0.0001,
+  "clip_sample": true,
+  "num_train_timesteps": 1000,
+  "trained_betas": null,
+  "variance_type": "fixed_small"
+}
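
The scheduler config pins down DDPM's forward process: a linear beta schedule from 1e-4 to 2e-2 over 1000 timesteps, the "fixed small" posterior variance from the DDPM paper, and clipping of predicted samples to [-1, 1]. A small sketch rebuilding the scheduler from these values and checking the resulting noise schedule:

```python
# Sketch: recreate the scheduler from the values in scheduler_config.json.
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(
    num_train_timesteps=1000,
    beta_start=0.0001,
    beta_end=0.02,
    beta_schedule="linear",
    trained_betas=None,
    variance_type="fixed_small",
    clip_sample=True,
)

print(scheduler.betas[0].item(), scheduler.betas[-1].item())  # 0.0001 0.02
# cumulative signal level at the final timestep: essentially pure noise
print(scheduler.alphas_cumprod[-1].item())  # ~4e-5
```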