# simpletuner-lora-sd35l-v57
This is a standard PEFT LoRA derived from sd3/unknown-model.
The main validation prompt used during training was:
```
illustration of a chef cat camping in the wilderness, grilling a fish over a campfire. The cat, wearing a traditional chef's hat and apron, stands next to a neatly arranged campsite with a small tent, a backpack, and camping gear scattered around. The campfire emits a warm, flickering glow, illuminating the cat's fur and the fish skewered above the flames. The surrounding forest is lush and green, with tall trees and a clear, night sky with many stars visible above. The cat's eyes are wide open, intently staring at the fish as it cooks. The scene captures the essence of outdoor adventure outdoor camping, warm campfire lighting, serene wilderness.
```
## Validation settings
- CFG: 7.5
- CFG Rescale: 0.0
- Steps: 25
- Sampler: FlowMatchEulerDiscreteScheduler
- Seed: 42
- Resolution: 1024
- Skip-layer guidance:
Note: The validation settings are not necessarily the same as the training settings.
The text encoder was not trained. You may reuse the base model text encoder for inference.
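Because only the transformer was trained, the adapter should attach exclusively to the transformer when loaded, leaving the text encoders identical to the base model. The sketch below is one hedged way to confirm this; it assumes a recent diffusers release that exposes `get_list_adapters()` and reuses the same model and adapter identifiers as the Inference section further down.

```python
import torch
from diffusers import DiffusionPipeline

# Load the base model and the adapter (same ids as in the Inference section below).
pipeline = DiffusionPipeline.from_pretrained('/my_volume/weights', torch_dtype=torch.bfloat16)
pipeline.load_lora_weights('majestio/simpletuner-lora-sd35l-v57')

# List which pipeline components carry LoRA adapters. The text encoders should
# not appear here, because they were not trained.
print(pipeline.get_list_adapters())
# e.g. {'transformer': ['default_0']}  (the adapter name may differ)
```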
## Training settings
- Training epochs: 43
- Training steps: 2000
- Learning rate: 0.0002
  - Learning rate schedule: cosine
  - Warmup steps: 200
- Max grad norm: 2.0
- Effective batch size: 2
  - Micro-batch size: 2
  - Gradient accumulation steps: 1
  - Number of GPUs: 1
- Gradient checkpointing: True
- Prediction type: flow-matching (extra parameters=['shift=3'])
- Optimizer: adamw_bf16
- Trainable parameter precision: Pure BF16
- Caption dropout probability: 0.0%
- LoRA Rank: 256
- LoRA Alpha: 256.0
- LoRA Dropout: 0.1
- LoRA initialisation style: default (a configuration sketch covering the LoRA and flow-matching settings follows this list)
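As a rough illustration of how these hyperparameters map onto code, the sketch below builds a PEFT `LoraConfig` with the rank, alpha, and dropout listed above, plus a `FlowMatchEulerDiscreteScheduler` carrying the `shift=3` flow-matching parameter. The `target_modules` list is an assumption for illustration only; the card does not state which projections SimpleTuner targeted.

```python
from diffusers import FlowMatchEulerDiscreteScheduler
from peft import LoraConfig

# LoRA hyperparameters from the list above. target_modules is an assumed set of
# attention projections; SimpleTuner selects the actual targets internally.
lora_config = LoraConfig(
    r=256,                   # LoRA Rank
    lora_alpha=256,          # LoRA Alpha
    lora_dropout=0.1,        # LoRA Dropout
    init_lora_weights=True,  # "default" initialisation style
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],  # assumption, not from the card
)

# Flow-matching scheduler with the shift=3 extra parameter noted above.
scheduler = FlowMatchEulerDiscreteScheduler(shift=3.0)
```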
## Datasets

### mynewlora-sd35m
- Repeats: 0
- Total number of images: 87
- Total number of aspect buckets: 9
- Resolution: 1.0 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No (an illustrative dataloader entry follows this list)
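For reference, a SimpleTuner dataloader entry reflecting these settings might look roughly like the sketch below. Every field name here is an assumption based on SimpleTuner's multidatabackend-style configuration, and the data directory is a placeholder; consult the SimpleTuner documentation for the authoritative schema.

```python
import json

# Hypothetical dataloader entry mirroring the dataset settings above.
# Field names and values are assumptions, not a verified schema.
dataset_entry = {
    "id": "mynewlora-sd35m",
    "type": "local",
    "instance_data_dir": "/path/to/dataset",  # placeholder path
    "resolution": 1.0,                        # 1.0 megapixels
    "resolution_type": "area",
    "crop": False,                            # Cropped: False
    "repeats": 0,                             # Repeats: 0
    "is_regularisation_data": False,          # not used for regularisation
}

with open("multidatabackend.json", "w") as f:
    json.dump([dataset_entry], f, indent=4)
```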
## Inference

```python
import torch
from diffusers import DiffusionPipeline

model_id = '/my_volume/weights'
adapter_id = 'majestio/simpletuner-lora-sd35l-v57'
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)  # loading directly in bf16
pipeline.load_lora_weights(adapter_id)

prompt = "illustration of a chef cat camping in the wilderness, grilling a fish over a campfire. The cat, wearing a traditional chef's hat and apron, stands next to a neatly arranged campsite with a small tent, a backpack, and camping gear scattered around. The campfire emits a warm, flickering glow, illuminating the cat's fur and the fish skewered above the flames. The surrounding forest is lush and green, with tall trees and a clear, night sky with many stars visible above. The cat's eyes are wide open, intently staring at the fish as it cooks. The scene captures the essence of outdoor adventure outdoor camping, warm campfire lighting, serene wilderness."
negative_prompt = 'blurry, cropped, ugly'

## Optional: quantise the model to save on vram.
## Note: The model was not quantised during training, so it is not necessary to quantise it during inference time.
# from optimum.quanto import quantize, freeze, qint8
# quantize(pipeline.transformer, weights=qint8)
# freeze(pipeline.transformer)

pipeline.to('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu')  # the pipeline is already in its target precision level
image = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=25,
    generator=torch.Generator(device='cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu').manual_seed(42),
    width=1024,
    height=1024,
    guidance_scale=7.5,
).images[0]
image.save("output.png", format="PNG")
```
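If the adapter's influence is too strong or too weak, recent diffusers releases let you scale it before generating. The snippet below reuses the `pipeline` object from the block above; it is a sketch rather than a required step, and the chosen scale value is only an example.

```python
# Optional: fold the LoRA weights into the base model at a reduced strength.
# fuse_lora() merges the adapter at the given scale; unfuse_lora() reverses it.
pipeline.fuse_lora(lora_scale=0.8)  # values below 1.0 weaken the LoRA's effect
# ... run pipeline(...) as above ...
# pipeline.unfuse_lora()            # restore the unfused base weights if needed
```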