---
license: cc-by-nc-4.0
base_model: stabilityai/stable-diffusion-xl-base-1.0
dataset: sshh12/planet-textures
tags:
  - stable-diffusion-xl
  - stable-diffusion-xl-diffusers
  - text-to-image
  - diffusers
  - lora
  - planets
  - space
  - procedural-generation
inference: false
---

These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0, fine-tuned on the sshh12/planet-textures dataset.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.

| Prompt | Image |
| --- | --- |
| A turquoise-hued gas giant, streaked with swirling white wisps of high-altitude clouds, wrapped in a thin, multicolored ring system | img_1 |
| A barren desert planet, coated in fine rusty-red sand, pockmarked with deep, dark craters, and cloaked in a thin, hazy atmosphere | img_2 |
| A small, icy moon, encased in a shell of pure white ice and dust, covered with intricate patterns of frosty fissures | img_3 |
| A metallic asteroid, featuring a rugged, heavily cratered surface with a shiny, silver-grey coloration | img_4 |
| A vibrant blue planet, teeming with lush, tropical forests and deep, azure oceans, surrounded by a thick, oxygen-rich atmosphere | img_5 |
| A dwarf planet exhibiting a striking purple color, with a surface peppered with craters and towering ice formations | img_6 |
| A dusty, barren moon, characterized by a dull, yellowish-brown surface, marked by long, winding canyons and cliffs | img_7 |

## 🧨 Diffusers Usage

```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL

# Use the fp16-fix VAE (same VAE used during training) to avoid NaNs in float16.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
)
pipe.load_lora_weights("sshh12/sdxl-lora-planet-textures")
pipe.to("cuda")

prompt = "A dwarf planet exhibiting a striking purple color, with a surface peppered with craters and towering ice formations"
negative_prompt = "blurry, fuzzy, low resolution, cartoon, painting"

# Textures are generated at a 2:1 aspect ratio (1024x512), matching the training resolution.
image = pipe(prompt=prompt, negative_prompt=negative_prompt, width=1024, height=512).images[0]
image
```
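
Because the adapter is attached with `load_lora_weights`, its influence can also be scaled at inference time via `cross_attention_kwargs`. This is a minimal sketch; the `0.8` scale below is an illustrative value, not a tuned recommendation.

```python
# Dial the LoRA strength down from the default of 1.0 (illustrative value).
image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=1024,
    height=512,
    cross_attention_kwargs={"scale": 0.8},
).images[0]
```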

## Training

GitHub: https://github.com/sshh12/planet-diffusion

```bash
MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0"
DATASET_NAME="sshh12/planet-textures"

accelerate launch v2/train_text_to_image_lora_sdxl.py \
  --pretrained_model_name_or_path="$MODEL_NAME" \
  --pretrained_vae_model_name_or_path="madebyollin/sdxl-vae-fp16-fix" \
  --dataset_name="$DATASET_NAME" \
  --caption_column="text" \
  --width=1024 \
  --height=512 \
  --random_hflip \
  --random_vflip \
  --mixed_precision="fp16" \
  --use_8bit_adam \
  --train_batch_size=1 \
  --gradient_accumulation_steps=2 \
  --num_train_epochs=500 \
  --checkpointing_steps=100 \
  --learning_rate=1e-05 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --seed=0 \
  --validation_epochs=5 \
  --validation_prompt_file="v2/validation_prompts.txt" \
  --enable_xformers_memory_efficient_attention \
  --report_to="wandb"
```
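
The script reads captions from the dataset's `text` column (`--caption_column="text"`). As a quick sanity check before training, the dataset can be inspected with 🤗 Datasets; this sketch assumes the usual image/text column layout implied by the flags above (the `image` column name is an assumption).

```python
from datasets import load_dataset

# Load the training data referenced by --dataset_name and peek at one example.
ds = load_dataset("sshh12/planet-textures", split="train")
print(ds)                 # column names and row count
example = ds[0]
print(example["text"])    # caption used as the training prompt
example["image"].save("example_texture.png")  # assumed image column; saves a PIL image
```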