---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
  - en
tags:
  - flux
  - diffusers
  - lora
  - replicate
base_model: black-forest-labs/FLUX.1-dev
pipeline_tag: text-to-image
instance_prompt: BLOK_02.CR2
widget:
  - text: >-
      BLOK_02.CR2, 1905, full height, a best quality color photo portrait of
      Alexander Blok writing a poem in 1905, curly hair, Edwardian coat
    output:
      url: images/example_y2ucjfyiz.png
  - text: >-
      young BLOK_02.CR2 in Petersburg, 1905, full height, a best quality color
      photo portrait of Alexander Blok strolling in St Petersburg in 1905 while
      writing a poem in 1905, curly hair, Edwardian coat
    output:
      url: images/example_pat2lbcsh.png
  - text: >-
      Generated example for model
      AlekseyCalvin/Alexander_BLOK_Flux_LoRA_SilverAgePoets_v3. Prompt: young
      BLOK_02.CR2 in Petersburg, 1905, full height, a best quality color photo
      portrait of Alexander Blok strolling in St Petersburg in 1905 while
      writing a poem in 1905, curly hair, Edwardian coat
    output:
      url: images/example_bvut2tyo4.png
---

# Alexander Blok Flux Low-Rank Adapter (LoRA) for SilverAgePoets.com

Version 3 (aka "2_1")

An adapter to reproduce the likeness of the legendary Symbolist/Modernist Russian and Soviet poet:
**Alexander Blok** *(b.1880-d.1921)*
[CLICK HERE TO READ OUR TRANSLATION OF BLOK'S "STRANGER"/"NEZNAKOMKA"](https://www.silveragepoets.com/blokstranger)

This version of our Blok LoRA is the product of an experimental training run: transferring face and attribute features from historical photos with minimal compute time by combining a high training rank and a relatively high learning rate with a minimal number of steps.
This version was trained at rank 128 (linear dims + alpha), a learning rate of 0.0005, batch size 2 (on a dataset of only 12 images, but at three resolutions: 512, 768, and 1024), minimalist descriptive captions with a dropout rate of 0.09 (9%), the adamw8bit optimizer, and only 50 steps (!), with no warmup.
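For reference, the hyperparameters above can be summarized as a config sketch. The key names below are our own shorthand, loosely modeled on ai-toolkit-style trainer configs; only the values come from this card.

```python
# Illustrative summary of the run's hyperparameters.
# Key names are approximate; values are taken from the description above.
training_config = {
    "network": {"type": "lora", "linear": 128, "linear_alpha": 128},  # rank 128, alpha = rank
    "lr": 5e-4,                    # relatively high learning rate
    "batch_size": 2,
    "steps": 50,                   # very short run
    "warmup_steps": 0,             # no warmup
    "optimizer": "adamw8bit",
    "caption_dropout_rate": 0.09,  # 9% of captions dropped during training
    "dataset": {"images": 12, "resolutions": [512, 768, 1024]},
}
```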
All in all, we consider the experiment fairly successful, and strikingly demonstrative of the absorbent learning capacity not just of FLUX models, but of DiT-based models more broadly.
We will soon reproduce this experiment on one of the homebrew de-distilled versions of FLUX, and see whether fast learning improves or diminishes without the extra steering from distilled guidance during fine-tuning.
And we are most curious whether re-introducing distilled guidance during inference still zeroes in on features learned over so few steps.
Higher-step training with de-distilled Flux models has so far demonstrated broader potential than any other technique we've tried.
But that's neither here nor there, as this particular LoRA was trained on a regular distilled version anyhow.
**Regarding file size:**
Ideally, we ought to extract only the learned features into a much smaller LoRA file.
We will get around to doing that eventually.
For now, we ask anyone interested in using this locally to forgive us the huge file size!
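As a rough illustration of what such an extraction could look like, here is a minimal sketch that truncates a single LoRA weight pair to a lower rank via SVD. It assumes the standard parameterization delta_W = up @ down; a real extraction tool would iterate over every weight pair in the safetensors file and adjust alpha/scaling accordingly, and the function name and sizes below are our own for demonstration.

```python
import numpy as np

def truncate_lora_pair(down, up, new_rank):
    """Truncate one LoRA (down, up) pair to `new_rank` via SVD.

    Assumes delta_W = up @ down, with down: (r, in_f) and up: (out_f, r).
    """
    delta = up @ down  # reconstruct the full low-rank update matrix
    U, S, Vh = np.linalg.svd(delta, full_matrices=False)
    # Keep only the top `new_rank` singular directions.
    U, S, Vh = U[:, :new_rank], S[:new_rank], Vh[:new_rank, :]
    sqrt_s = np.sqrt(S)
    new_up = U * sqrt_s            # (out_f, new_rank), columns scaled by sqrt(S)
    new_down = sqrt_s[:, None] * Vh  # (new_rank, in_f), rows scaled by sqrt(S)
    return new_down, new_up

# Demo on a synthetic rank-128 pair (matrix sizes are arbitrary):
rng = np.random.default_rng(0)
down = rng.standard_normal((128, 512)) * 0.01
up = rng.standard_normal((512, 128)) * 0.01
new_down, new_up = truncate_lora_pair(down, up, new_rank=16)
print(new_down.shape, new_up.shape)  # rank-16 pair, a fraction of the parameters
```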
**Regarding the weird trigger word:**
We recently learned of a knowledge-activation prompting technique for Flux models: approximating digital camera file names in prompts.
This technique (prompting with tokens like "*object*_02.cr2", etc.) demonstrably results in more natural, "raw"-looking photorealistic outputs from the Flux base model(s). So, we decided to run a parallel co-experiment to see whether and how this base knowledge might affect fine-tune training.
So, instead of using the poet's name, we simply used a camera file name-like token of 'BLOK_02.CR2'.
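In practice, that just means placing the camera-file-style token where the subject's name would otherwise go. A minimal sketch, using one of this card's example prompts:

```python
# Only the trigger token itself is fixed; the rest of the prompt is free-form.
trigger = "BLOK_02.CR2"
prompt = (
    f"{trigger}, 1905, full height, a best quality color photo portrait "
    "of Alexander Blok writing a poem in 1905, curly hair, Edwardian coat"
)
print(prompt)
```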
## Trigger words

You should use `BLOK_02.CR2` to trigger the image generation.

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline in half precision on the GPU.
pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')

# Attach this LoRA adapter on top of the base weights.
pipeline.load_lora_weights('AlekseyCalvin/BlokFlux2_1', weight_name='lora.safetensors')

# Remember to include the trigger token BLOK_02.CR2 in your prompt.
image = pipeline('your prompt').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)