Update README.md
# Proteus RunDiffusion x DPO (Direct Preference Optimization) SDXL

```note
An Experimental Merge. I run through a brief summary of both models below. Image results are at the end of the post if you're not interested in - or are already familiar with - the idea behind Proteus & DPO.
```
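
Since the actual merge recipe isn't spelled out in this README, here is only a rough sketch of what a weighted checkpoint merge does in principle: a per-tensor weighted average of the two models' weights. The function name, the `alpha` value, and the fallback behaviour are illustrative assumptions, not the recipe used for this model (NumPy stands in for the torch state dicts a real merge would operate on):

```python
# Illustrative sketch of a simple weighted checkpoint merge - NOT the actual
# recipe behind this model. `alpha` and the key-mismatch fallback are assumptions.
import numpy as np

def weighted_merge(state_a, state_b, alpha=0.5):
    """Linearly interpolate two state dicts: (1 - alpha) * A + alpha * B."""
    merged = {}
    for key, tensor_a in state_a.items():
        tensor_b = state_b.get(key)
        if tensor_b is not None and tensor_b.shape == tensor_a.shape:
            merged[key] = (1 - alpha) * tensor_a + alpha * tensor_b
        else:
            merged[key] = tensor_a.copy()  # keep A's weights where B has no match
    return merged
```

Real merge tools add refinements on top of this (per-block weights, add-difference modes, and so on), but the weighted average is the baseline idea.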

## Preface

I tested multiple sampler/scheduler/CFG/Step combinations before deciding on a foundation for the tests. This was another subjective choice, and I'm sure other parameters could have very different - and probably even better - outcomes. These were sufficient for testing my goal, though:

```note
Tests were run in ComfyUI with the model's CLIP and VAE, no LoRAs, and with no pre/post-processing. Worth noting is that DataPulse recommends a CLIP skip of -2. I did not make any modifications to the CLIP skip layer during my merge variant testing. My only adjustment was setting the CLIP scaling to 4 to improve the final quality and clarity of the images. The workflow was kept as simple as possible, running only base sampling in a single KSampler node. Any errors or small issues were left as is for consistency.
```

The workflow is available in the [Huggingface repository](https://huggingface.co/PikkieWyn/Proteus-RunDiffusion-DPO_Merge/tree/main/comparisons). The comparison images are also the original, unaltered output images, and have their workflows embedded. They're all the same, of course, the only variables being the model used and the prompt text for each test. If you plan on loading the workflow, note that Use Everywhere (UE Nodes) was used to neaten the flow (CLIP weight and seed distribution), and OneButtonPrompt's preset node was used to generate some random base prompts - so you'll either need to install these or make the necessary tweaks to bypass them.
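
Because the workflows are embedded in the comparison images, they can be pulled back out of the PNG metadata without loading them into ComfyUI at all; ComfyUI stores the graph as JSON in the image's text chunks. A small sketch using Pillow (the `"workflow"`/`"prompt"` key names reflect ComfyUI's usual convention, and the function name is mine):

```python
# Hedged sketch: ComfyUI typically embeds the graph as JSON in PNG text
# metadata, commonly under the "workflow" and/or "prompt" keys.
import json
from PIL import Image

def read_embedded_workflow(png_path):
    info = Image.open(png_path).info  # PNG tEXt/iTXt chunks end up in .info
    raw = info.get("workflow") or info.get("prompt")
    return json.loads(raw) if raw is not None else None
```

Handy for checking exactly which settings produced a given comparison image before deciding whether to install the extra node packs.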

If you're not familiar with the extension and would like to try it out, it can produce some amazing results on its own. Below is a relatively safe setup for fairly consistent results:

```note
- Set your CFG to a very low value; I usually stick with 1.5, as 1 tends to turn to chaos very easily, especially at such high CFG levels
- Set the "mimic scale" to 30 and "threshold percentile" to 0.9 (if you lower the CFG to a range of 18-26, 0.95 works well)
- Set both mimic modes to "Half Cosine Up", both scale minimum values to 4 and the scheduler value to 4

The rest of your inputs and processing steps can be used as usual. I've only hit walls with some LoRAs that can't handle the extremes, resulting in either a jumbled mess or pure black/blue/grey outputs. So if you experience this, look at removing LoRAs before lowering your mimic level, etc.
```
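
For intuition about what the "threshold percentile" and "mimic scale" settings above are doing: the core dynamic-thresholding idea is to clamp the denoised output to a chosen percentile of its magnitude, so an aggressive effective CFG can be pushed without values blowing out into the black/grey failure mode mentioned. A simplified sketch (this is my own illustration of the general technique, not the extension's actual code, and it omits the mimic-scale rescaling and mode scheduling):

```python
# Simplified illustration of dynamic thresholding - NOT the extension's code.
# Clamp values to a percentile of their magnitude, then renormalize.
import numpy as np

def dynamic_threshold(values, percentile=0.9):
    s = np.quantile(np.abs(values), percentile)
    s = max(s, 1.0)                    # never tighten below unit scale
    return np.clip(values, -s, s) / s  # clamp outliers, renormalize to [-1, 1]
```

A higher percentile (e.g. the 0.95 suggested above) preserves more of the extreme values; a lower one clamps more aggressively.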

I'll give two brief (subjective) comments on the 1st and 4th comparisons, as they tie in with earlier statements; the rest I'll leave to individual interpretation (as it should be).