PseudoTerminal X committed on
Commit 2209f51
1 Parent(s): 1bfdf8a

Model card auto-generated by SimpleTuner

Files changed (1)
  1. README.md +11 -9
README.md CHANGED
@@ -61,6 +61,8 @@ This is a LoRA derived from [black-forest-labs/FLUX.1-dev](https://huggingface.c
 
 The main validation prompt used during training was:
 
+
+
 ```
 julie, in photograph style
 ```
@@ -86,17 +88,17 @@ You may reuse the base model text encoder for inference.
 
 ## Training settings
 
-- Training epochs: 63
-- Training steps: 700
-- Learning rate: 1.0
-- Effective batch size: 1
+- Training epochs: 147
+- Training steps: 2500
+- Learning rate: 1e-05
+- Effective batch size: 2
 - Micro-batch size: 1
-- Gradient accumulation steps: 1
+- Gradient accumulation steps: 2
 - Number of GPUs: 1
 - Prediction type: flow-matching
 - Rescaled betas zero SNR: False
-- Optimizer: Prodigy
-- Precision: no
+- Optimizer: AdamW, stochastic bf16
+- Precision: Pure BF16
 - Xformers: Not used
 - LoRA Rank: 16
 - LoRA Alpha: 16.0
@@ -108,9 +110,9 @@ You may reuse the base model text encoder for inference.
 
 ### julia
 - Repeats: 0
-- Total number of images: 11
+- Total number of images: 34
 - Total number of aspect buckets: 1
-- Resolution: 1.0 megapixels
+- Resolution: 512 px
 - Cropped: True
 - Crop style: random
 - Crop aspect: square
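
For context, the updated settings describe a rank-16 LoRA trained on FLUX.1-dev in pure bf16; the effective batch size of 2 follows from a micro-batch of 1 × 2 gradient-accumulation steps × 1 GPU. Below is a minimal inference sketch using the `diffusers` `FluxPipeline`; it is not part of this commit, and the adapter path `path/to/this-lora` is a hypothetical placeholder for wherever this repository's LoRA weights live.

```python
# Minimal sketch (assumption, not from the commit): load the base FLUX.1-dev
# model and apply this LoRA with diffusers, then run the validation prompt.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,  # matches the "Pure BF16" training precision
)
pipe.load_lora_weights("path/to/this-lora")  # hypothetical local path or Hub repo id
pipe.to("cuda")

# Validation prompt from the model card.
image = pipe(
    "julie, in photograph style",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("julie.png")
```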