
LoRA text2image fine-tuning - KorAI/sdxl-base-1.0-onepiece-lora

These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0, fine-tuned on the KorAI/onepiece-captioned dataset. Example images are shown below.

img_0 img_1 img_2 img_3

LoRA for the text encoder was enabled: True.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.

Intended uses & limitations

How to use

from diffusers import DiffusionPipeline
import torch

# Load Stable Diffusion XL Base 1.0
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
)
pipe.to("cuda")

# Optional: offload submodules to the CPU to save GPU memory.
# Use this *instead of* pipe.to("cuda") -- it manages device placement itself.
# pipe.enable_model_cpu_offload()

# Load the trained LoRA weights
pipe.load_lora_weights("KorAI/sdxl-base-1.0-onepiece-lora")

prompt = "Acilia Anime, anime character in a bikini with a sword and shield"

# Run the pipeline to generate an image
image = pipe(
    prompt=prompt,
    num_inference_steps=50,
    height=1024,
    width=1024,
    guidance_scale=7.0,
).images[0]

# Display the image (in a notebook)
image

# Save the image
image.save("sdxl_onepiece.png")

Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

Training details

[TODO: describe the data used to train the model]
