benjamin-paine committed
Commit 57f7c80 · 1 parent: a467d97
Update README.md

README.md CHANGED
---
license: openrail++
---

# Contents

This repository contains:
1. Half-Precision LoRA versions of https://huggingface.co/mhdang/dpo-sdxl-text2image-v1 and https://huggingface.co/mhdang/dpo-sd1.5-text2image-v1.
2. Full-Precision offset versions of https://huggingface.co/mhdang/dpo-sdxl-text2image-v1 and https://huggingface.co/mhdang/dpo-sd1.5-text2image-v1.

# Creation

## LoRA

The LoRAs were created using Kohya SS; a rough sketch of what LoRA extraction does follows the links below.

1.5: https://civitai.com/models/240850/sd15-direct-preference-optimization-dpo extracted from https://huggingface.co/fp16-guy/Stable-Diffusion-v1-5_fp16_cleaned/blob/main/sd_1.5.safetensors.
XL: https://civitai.com/models/238319/sd-xl-dpo-finetune-direct-preference-optimization extracted from https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0_0.9vae.safetensors.
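
Conceptually, extracting a LoRA means taking a low-rank approximation of the difference between the fine-tuned weights and the base weights. The sketch below illustrates that idea on a single randomly generated weight matrix; the rank, shapes, and variable names are assumptions for illustration only, not the Kohya SS settings used for these files.

```py
# Conceptual sketch only (not the Kohya SS command used for this repository):
# approximate one weight delta with a rank-r LoRA pair via truncated SVD.
import torch

rank = 32  # assumed rank, for illustration

# Stand-ins for one weight matrix from the base and DPO-tuned checkpoints.
w_base = torch.randn(768, 768)
w_tuned = w_base + 0.01 * torch.randn(768, 768)

delta = w_tuned - w_base  # what the fine-tune changed
u, s, vh = torch.linalg.svd(delta, full_matrices=False)

# Keep the top-`rank` singular directions, folding the singular values into both factors.
lora_up = u[:, :rank] * s[:rank].sqrt()                  # shape (768, rank)
lora_down = s[:rank].sqrt().unsqueeze(1) * vh[:rank, :]  # shape (rank, 768)

# lora_up @ lora_down is the low-rank approximation of the delta.
print(torch.dist(delta, lora_up @ lora_down))
```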

## Offsets

The offsets were computed in PyTorch as the element-wise difference between the DPO UNet weights and the base UNet weights (a sketch of the computation follows the links):

1.5: https://huggingface.co/mhdang/dpo-sd1.5-text2image-v1/blob/main/unet/diffusion_pytorch_model.safetensors - https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/unet/diffusion_pytorch_model.bin
XL: https://huggingface.co/mhdang/dpo-sdxl-text2image-v1/blob/main/unet/diffusion_pytorch_model.safetensors - https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/unet/diffusion_pytorch_model.safetensors
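
A minimal PyTorch sketch of that subtraction is below. It is an assumed reconstruction, not the exact script that was run; the local file names are placeholders, and for the SD 1.5 base (a .bin file) you would load with torch.load instead of safetensors.

```py
# Sketch of building an offset: (DPO UNet weights) - (base UNet weights).
# File names are placeholders; download the checkpoints linked above first.
import torch
from safetensors.torch import load_file, save_file

dpo = load_file("dpo_unet.safetensors")    # fine-tuned UNet state dict
base = load_file("base_unet.safetensors")  # base UNet state dict

offsets = {
    key: dpo[key].to(torch.float32) - base[key].to(torch.float32)
    for key in dpo
    if key in base and dpo[key].shape == base[key].shape
}

save_file(offsets, "dpo_offset.safetensors")
```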

These offsets can be added directly to any initialized UNet to inject the DPO training into it. See the code below for usage (diffusers only).
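
As a hedged illustration of what "adding to an initialized UNet" looks like in diffusers (the model ID and offset file name below are placeholders; the repository's own usage example follows under Usage):

```py
# Minimal sketch: add offset tensors onto an initialized diffusers UNet, in place.
# Placeholder model ID and file name; adapt them to the checkpoint you are using.
import torch
from diffusers import UNet2DConditionModel
from safetensors.torch import load_file

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet", torch_dtype=torch.float32
)
offsets = load_file("dpo_offset.safetensors")

with torch.no_grad():
    for name, param in unet.named_parameters():
        if name in offsets:
            # Shift each parameter by its offset: base + (DPO - base) = DPO.
            param.add_(offsets[name].to(dtype=param.dtype, device=param.device))
```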

# License

These models are all derived from OpenRail++-licensed models, and are themselves licensed under OpenRail++.

# Usage

## Offsets

```py
from __future__ import annotations