Post
1006
Single Block / Layer FLUX LoRA Training Research Results and LoRA Network Alpha Change Impact With LoRA Network Rank Dimension
Full article posted here : https://medium.com/@furkangozukara/single-block-layer-flux-lora-training-research-results-and-lora-network-alpha-change-impact-with-e713cc89c567
Conclusions
As expected, training fewer parameters (e.g. LoRA vs Full Fine Tuning, or single-block LoRA vs all-blocks LoRA) reduces quality
In exchange you gain some VRAM savings and a smaller file size on disk
Moreover, fewer parameters reduce both overfitting and the realism of the FLUX model, so if you are after stylized outputs such as comics, it may actually work better
Furthermore, when you reduce the LoRA Network Rank, keep the original Network Alpha unless you plan to do new Learning Rate research
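The reason to keep Alpha fixed is that LoRA updates are scaled by Alpha / Rank, so changing Rank while holding Alpha constant changes the effective learning rate of the adapter. A minimal sketch of that relationship (the specific numbers here are illustrative, not from the post):

```python
def lora_scale(alpha: float, rank: int) -> float:
    """Effective scaling factor applied to LoRA weight updates: alpha / rank."""
    return alpha / rank

# Keeping alpha = 128 while dropping rank from 128 to 32
# quadruples the effective update strength:
print(lora_scale(128, 128))  # 1.0
print(lora_scale(128, 32))   # 4.0
```

This is why a rank change without an Alpha (or Learning Rate) adjustment can behave as if you changed the learning rate itself.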
Finally, the very best quality and the least overfitting are achieved with full Fine Tuning
Check the last columns of figure 3 and figure 4: I set the extracted LoRA Strength / Weight to 1.1 instead of 1.0
Full fine tuning configs and instructions > https://www.patreon.com/posts/112099700
Second best is extracting a LoRA from the fine-tuned model, if you need a LoRA
Check the last columns of figure 3 and figure 4: I set the extracted LoRA Strength / Weight to 1.1 instead of 1.0
Extract LoRA guide (public article) : https://www.patreon.com/posts/112335162
Third is doing an all-layers regular LoRA training
Full guide, configs and instructions > https://www.patreon.com/posts/110879657
And the worst quality comes from training fewer blocks / layers with LoRA
Full configs are included in > https://www.patreon.com/posts/110879657
So how much VRAM and speed does single-block LoRA training save?
All layers, 16-bit: 27700 MB (4.85 s/it) vs 1 single block: 25800 MB (3.7 s/it)
All layers, 8-bit: 17250 MB (4.85 s/it) vs 1 single block: 15700 MB (3.8 s/it)
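To put those measurements in relative terms, a quick calculation of the percent savings (input values are the numbers reported above; the helper function is just for illustration):

```python
def savings(full: float, single: float) -> float:
    """Percent reduction going from all-layers training to single-block training."""
    return (full - single) / full * 100

print(f"16-bit VRAM saved: {savings(27700, 25800):.1f}%")  # ~6.9%
print(f"16-bit speedup:    {savings(4.85, 3.7):.1f}%")     # ~23.7%
print(f"8-bit VRAM saved:  {savings(17250, 15700):.1f}%")  # ~9.0%
print(f"8-bit speedup:     {savings(4.85, 3.8):.1f}%")     # ~21.6%
```

So the step-time improvement is considerably larger than the VRAM saving in both precisions.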
Image Raw Links
Figure 0 : MonsterMMORPG/FLUX-Fine-Tuning-Grid-Tests