playerzer0x committed on
Commit 938e8ea
1 Parent(s): 7c14a44

Model card auto-generated by SimpleTuner

---
license: other
base_model: "FLUX.1-dev"
tags:
- flux
- flux-diffusers
- text-to-image
- diffusers
- simpletuner
- not-for-all-audiences
- lora
- template:sd-lora
- lycoris
inference: true
---

# growwithdaisy/dsycam_20241115_133438

This is a LyCORIS adapter derived from [FLUX.1-dev](https://huggingface.co/FLUX.1-dev).

The main validation prompt used during training was:

```
a photo of a daisy
```

## Validation settings

- CFG: `3.5`
- CFG Rescale: `0.0`
- Steps: `20`
- Sampler: `None`
- Seed: `69`
- Resolution: `1024x1024`

Note: The validation settings are not necessarily the same as the [training settings](#training-settings).

<Gallery />

The text encoder **was not** trained. You may reuse the base model text encoder for inference.

## Training settings

- Training epochs: 0
- Training steps: 500
- Learning rate: 0.0001
- Max grad norm: 2.0
- Effective batch size: 16
- Micro-batch size: 2
- Gradient accumulation steps: 1
- Number of GPUs: 8
- Prediction type: flow-matching (extra parameters=['shift=3', 'flux_guidance_value=1.0'])
- Rescaled betas zero SNR: False
- Optimizer: optimi-stableadamw (weight_decay=1e-3)
- Precision: Pure BF16
- Quantised: No
- Xformers: Not used
- LyCORIS Config:
```json
{
    "algo": "lokr",
    "multiplier": 1,
    "linear_dim": 1000000,
    "linear_alpha": 1,
    "factor": 12,
    "init_lokr_norm": 0.001,
    "apply_preset": {
        "target_module": [
            "FluxTransformerBlock",
            "FluxSingleTransformerBlock"
        ],
        "module_algo_map": {
            "Attention": {
                "factor": 12
            },
            "FeedForward": {
                "factor": 6
            }
        }
    }
}
```
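
The effective batch size reported above is the product of the per-GPU micro-batch size, the number of GPUs, and the gradient accumulation steps. A quick sanity check:

```python
# Effective batch size = micro-batch size x number of GPUs x gradient accumulation steps
micro_batch_size = 2
num_gpus = 8
grad_accum_steps = 1

effective_batch_size = micro_batch_size * num_gpus * grad_accum_steps
print(effective_batch_size)  # 16, matching the value reported above
```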

## Datasets

### mnmlsmo_architecture_photography_style-512
- Repeats: 0
- Total number of images: ~10456
- Total number of aspect buckets: 7
- Resolution: 0.262144 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

### mnmlsmo_architecture_photography_style-768
- Repeats: 0
- Total number of images: ~9000
- Total number of aspect buckets: 10
- Resolution: 0.589824 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

### mnmlsmo_architecture_photography_style-1024
- Repeats: 2
- Total number of images: ~3960
- Total number of aspect buckets: 1
- Resolution: 1.048576 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

### mnmlsmo_art_photography_style-512
- Repeats: 0
- Total number of images: ~448
- Total number of aspect buckets: 6
- Resolution: 0.262144 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

### mnmlsmo_art_photography_style-768
- Repeats: 0
- Total number of images: ~416
- Total number of aspect buckets: 7
- Resolution: 0.589824 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

### mnmlsmo_art_photography_style-1024
- Repeats: 1
- Total number of images: ~232
- Total number of aspect buckets: 9
- Resolution: 1.048576 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

### mnmlsmo_furniture_photography_style-512
- Repeats: 0
- Total number of images: ~3888
- Total number of aspect buckets: 13
- Resolution: 0.262144 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

### mnmlsmo_furniture_photography_style-768
- Repeats: 0
- Total number of images: ~3352
- Total number of aspect buckets: 13
- Resolution: 0.589824 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

### mnmlsmo_furniture_photography_style-1024
- Repeats: 1
- Total number of images: ~1728
- Total number of aspect buckets: 4
- Resolution: 1.048576 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

### mnmlsmo_homewares_photography_style-512
- Repeats: 0
- Total number of images: ~1096
- Total number of aspect buckets: 6
- Resolution: 0.262144 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

### mnmlsmo_homewares_photography_style-768
- Repeats: 0
- Total number of images: ~1072
- Total number of aspect buckets: 3
- Resolution: 0.589824 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

### mnmlsmo_homewares_photography_style-1024
- Repeats: 1
- Total number of images: ~520
- Total number of aspect buckets: 2
- Resolution: 1.048576 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

### mnmlsmo_interiors_photography_style-512
- Repeats: 0
- Total number of images: ~1336
- Total number of aspect buckets: 4
- Resolution: 0.262144 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

### mnmlsmo_interiors_photography_style-768
- Repeats: 0
- Total number of images: ~1312
- Total number of aspect buckets: 5
- Resolution: 0.589824 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

### mnmlsmo_interiors_photography_style-1024
- Repeats: 1
- Total number of images: ~800
- Total number of aspect buckets: 1
- Resolution: 1.048576 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

### mnmlsmo_lighting_photography_style-512
- Repeats: 0
- Total number of images: ~504
- Total number of aspect buckets: 5
- Resolution: 0.262144 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

### mnmlsmo_lighting_photography_style-768
- Repeats: 0
- Total number of images: ~504
- Total number of aspect buckets: 5
- Resolution: 0.589824 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

### mnmlsmo_lighting_photography_style-1024
- Repeats: 0
- Total number of images: ~320
- Total number of aspect buckets: 4
- Resolution: 1.048576 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

### mnmlsmo_moods_photography_style-512
- Repeats: 0
- Total number of images: ~680
- Total number of aspect buckets: 3
- Resolution: 0.262144 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

### mnmlsmo_moods_photography_style-768
- Repeats: 0
- Total number of images: ~680
- Total number of aspect buckets: 3
- Resolution: 0.589824 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

### mnmlsmo_moods_photography_style-1024
- Repeats: 1
- Total number of images: ~352
- Total number of aspect buckets: 2
- Resolution: 1.048576 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

### mnmlsmo_technology_photography_style-512
- Repeats: 0
- Total number of images: ~680
- Total number of aspect buckets: 4
- Resolution: 0.262144 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

### mnmlsmo_technology_photography_style-768
- Repeats: 0
- Total number of images: ~680
- Total number of aspect buckets: 4
- Resolution: 0.589824 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

### mnmlsmo_technology_photography_style-1024
- Repeats: 1
- Total number of images: ~376
- Total number of aspect buckets: 3
- Resolution: 1.048576 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No
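
The `Resolution` values above are pixel areas expressed in megapixels; each corresponds to a square-equivalent edge length (the `-512`, `-768`, and `-1024` dataset suffixes). A quick conversion sketch:

```python
import math

# Convert a megapixel pixel-area figure to its square-equivalent edge length.
for megapixels in (0.262144, 0.589824, 1.048576):
    edge = math.isqrt(round(megapixels * 1_000_000))
    print(f"{megapixels} MP -> {edge}x{edge}")
# 0.262144 MP -> 512x512
# 0.589824 MP -> 768x768
# 1.048576 MP -> 1024x1024
```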

## Inference

```python
import torch
from diffusers import DiffusionPipeline
from lycoris import create_lycoris_from_weights


def download_adapter(repo_id: str):
    import os
    from huggingface_hub import hf_hub_download
    adapter_filename = "pytorch_lora_weights.safetensors"
    cache_dir = os.environ.get('HF_PATH', os.path.expanduser('~/.cache/huggingface/hub/models'))
    cleaned_adapter_path = repo_id.replace("/", "_").replace("\\", "_").replace(":", "_")
    path_to_adapter = os.path.join(cache_dir, cleaned_adapter_path)
    path_to_adapter_file = os.path.join(path_to_adapter, adapter_filename)
    os.makedirs(path_to_adapter, exist_ok=True)
    hf_hub_download(
        repo_id=repo_id, filename=adapter_filename, local_dir=path_to_adapter
    )
    return path_to_adapter_file


model_id = 'FLUX.1-dev'
adapter_repo_id = 'playerzer0x/growwithdaisy/dsycam_20241115_133438'
adapter_file_path = download_adapter(repo_id=adapter_repo_id)
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)  # loading directly in bf16
lora_scale = 1.0
wrapper, _ = create_lycoris_from_weights(lora_scale, adapter_file_path, pipeline.transformer)
wrapper.merge_to()

prompt = "a photo of a daisy"

# Optional: quantise the model to save on VRAM.
# Note: the model was not quantised during training, so quantisation is not required at inference time.
# from optimum.quanto import quantize, freeze, qint8
# quantize(pipeline.transformer, weights=qint8)
# freeze(pipeline.transformer)

device = 'cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu'
pipeline.to(device)  # the pipeline is already in its target precision level
image = pipeline(
    prompt=prompt,
    num_inference_steps=20,
    generator=torch.Generator(device=device).manual_seed(1641421826),
    width=1024,
    height=1024,
    guidance_scale=3.5,
).images[0]
image.save("output.png", format="PNG")
```