Trained for 306 epochs and 7000 steps.
Trained with datasets ['makima-text-embeds', 'makima-512']
Learning rate 0.0002, batch size 32, and 1 gradient accumulation step.
Trained with the DDPM noise scheduler (epsilon prediction type, rescaled_betas_zero_snr=False) using 'trailing' timestep spacing.
Base model: black-forest-labs/FLUX.1-dev
VAE: None
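A quick arithmetic check of the figures above. The effective batch size follows directly from the card; the dataset-size estimate is an inference from steps, epochs, and batch size, not a value stated in the card:

```python
# Sanity-check the reported training configuration.
# All inputs are taken from the card above; the dataset-size
# estimate is inferred, not stated.

batch_size = 32
grad_accum_steps = 1
total_steps = 7000
epochs = 306

# Effective batch size per optimizer step.
effective_batch = batch_size * grad_accum_steps  # 32

# Approximate optimizer steps per epoch and implied dataset size.
steps_per_epoch = total_steps / epochs           # ~22.9
approx_dataset_size = round(steps_per_epoch * effective_batch)

print(effective_batch, round(steps_per_epoch, 1), approx_dataset_size)
```

With no gradient accumulation, the effective batch equals the per-device batch of 32, and roughly 23 steps per epoch implies a dataset on the order of ~730 samples.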
pytorch_lora_weights.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:394b8a855eddc833db7ba5f4532be12673b9411ba1525b3e22531292ed1870db
 size 194456044
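The file above is a git-lfs pointer: three "key value" lines (version, oid, size) standing in for the actual weights. A minimal sketch of reading one, using the oid and size copied from the diff above:

```python
# Minimal parser for a git-lfs pointer file like the one in the
# diff above. A pointer is plain text: one "key value" pair per line.

def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-lfs pointer into {'version', 'oid', 'size'}."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    fields["size"] = int(fields["size"])  # size is the blob size in bytes
    return fields

# Pointer contents copied from the diff shown above.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:394b8a855eddc833db7ba5f4532be12673b9411ba1525b3e22531292ed1870db
size 194456044
"""

info = parse_lfs_pointer(pointer)
print(info["oid"], info["size"])
```

The size field (194456044 bytes, roughly 185 MiB) is the actual safetensors blob that git-lfs fetches when the repository is checked out.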