PseudoTerminal X committed on
Commit cfc2f0d
1 Parent(s): 0b82d72

Update README.md

Files changed (1)
  1. README.md +66 -2
README.md CHANGED
@@ -117,7 +117,13 @@ widget:
 
 This is a LoRA derived from [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev).
 
+Two subjects were trained in: "Julia", an AI-generated character, and River Phoenix, a real person.
 
+Empirically, training the two subjects simultaneously kept the model from collapsing, though they don't train evenly: River Phoenix took longer than "Julia", possibly due to the synthetic nature of the "Julia" data.
+
+The photos of "Julia" came from Flux Pro. The River Phoenix images were pulled from Google Image Search, with a focus on high-resolution, high-quality samples.
+
+No captions were used during training, only the instance prompts `julia` and `river phoenix`.
 
 The main validation prompt used during training was:
 
@@ -194,13 +200,12 @@ import torch
 from diffusers import DiffusionPipeline
 
 model_id = 'black-forest-labs/FLUX.1-dev'
-adapter_id = 'flux-dreambooth-lora'
+adapter_id = 'ptx0/flux-dreambooth-lora-r16-dev'
 pipeline = DiffusionPipeline.from_pretrained(model_id)
 pipeline.load_lora_weights(adapter_id)
 
 prompt = "julie, in photograph style"
 
-
 pipeline.to('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu')
 image = pipeline(
     prompt=prompt,
@@ -213,3 +218,62 @@ image = pipeline(
 image.save("output.png", format="PNG")
 ```
 
+
+## SimpleTuner Config
+
+The configuration used to train this model:
+
+```bash
+export MODEL_TYPE='lora'
+export TRAINING_SEED=420420420
+
+export CHECKPOINTING_STEPS=500
+export CHECKPOINTING_LIMIT=10
+
+export LEARNING_RATE=1e-4
+
+export FLUX=true
+export MODEL_NAME="black-forest-labs/FLUX.1-dev"
+
+export VALIDATION_SEED=420420420
+export VALIDATION_PROMPT="julie, in photograph style"
+export VALIDATION_NEGATIVE_PROMPT="blurry, cropped, ugly"
+
+# How frequently we will save and run a pipeline for validations.
+export VALIDATION_STEPS=500
+
+# Validation image settings.
+export VALIDATION_GUIDANCE=3.0
+export VALIDATION_GUIDANCE_REAL=3.0
+export VALIDATION_NUM_INFERENCE_STEPS=28
+export VALIDATION_GUIDANCE_RESCALE=0
+export VALIDATION_RESOLUTION=1024x1024
+
+export ALLOW_TF32=true
+export PURE_BF16=true
+
+export CAPTION_DROPOUT_PROBABILITY=0
+
+export MAX_NUM_STEPS=0
+export NUM_EPOCHS=1000
+
+export OPTIMIZER="adamw_bf16"
+export LR_SCHEDULE="constant"
+export LR_WARMUP_STEPS=500
+
+export TRAIN_BATCH_SIZE=1
+
+export RESOLUTION=512
+export RESOLUTION_TYPE=pixel
+
+export GRADIENT_ACCUMULATION_STEPS=2
+export MIXED_PRECISION="bf16"
+export TRAINING_DYNAMO_BACKEND='inductor'
+
+export USE_XFORMERS=false
+export USE_GRADIENT_CHECKPOINTING=true
+export VAE_BATCH_SIZE=8
+export TRAINER_EXTRA_ARGS="--aspect_bucket_worker_count=48 --lora_rank=16 --lora_alpha=16 --max_grad_norm=1.0 --gradient_precision=fp32 --base_model_default_dtype=bf16 --lora_init_type=default --flux_lora_target=all+ffs --user_prompt_library=user_prompt_library.json --webhook_config=webhooks.json --compress_disk_cache"
+
+export USE_EMA=false
+```
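
Since the adapter carries two instance prompts, here is a minimal sketch that renders one image per subject with the same pipeline as in the snippet above. The sampler settings mirror the validation config (28 steps, guidance 3.0, seed 420420420); the prompt phrasing beyond the instance token and the output filenames are only illustrative.

```python
import torch
from diffusers import DiffusionPipeline

model_id = 'black-forest-labs/FLUX.1-dev'
adapter_id = 'ptx0/flux-dreambooth-lora-r16-dev'

pipeline = DiffusionPipeline.from_pretrained(model_id)
pipeline.load_lora_weights(adapter_id)
pipeline.to('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu')

# One render per instance prompt; the wording after the instance token is illustrative.
for subject in ('julia', 'river phoenix'):
    image = pipeline(
        prompt=f"{subject}, in photograph style",
        num_inference_steps=28,
        guidance_scale=3.0,
        generator=torch.Generator(device='cpu').manual_seed(420420420),
    ).images[0]
    image.save(f"{subject.replace(' ', '_')}.png", format="PNG")
```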