---
library_name: diffusers
license: openrail++
language:
- en
tags:
- text-to-image
- stable-diffusion
- lora
- safetensors
- stable-diffusion-xl
base_model: Linaqruf/animagine-xl-2.0
widget:
- text: >-
    face focus, cute, masterpiece, best quality, 1girl, green hair, sweater,
    looking at viewer, upper body, beanie, outdoors, night, turtleneck
  parameters:
    negative_prompt: >-
      lowres, bad anatomy, bad hands, text, error, missing fingers, extra
      digit, fewer digits, cropped, worst quality, low quality, normal
      quality, jpeg artifacts, signature, watermark, username, blurry
  example_title: 1girl
- text: >-
    face focus, bishounen, masterpiece, best quality, 1boy, green hair,
    sweater, looking at viewer, upper body, beanie, outdoors, night,
    turtleneck
  parameters:
    negative_prompt: >-
      lowres, bad anatomy, bad hands, text, error, missing fingers, extra
      digit, fewer digits, cropped, worst quality, low quality, normal
      quality, jpeg artifacts, signature, watermark, username, blurry
  example_title: 1boy
---
# Anime Detailer XL LoRA

## Overview
Anime Detailer XL LoRA is a cutting-edge LoRA adapter designed to work alongside Animagine XL 2.0. This unique model specializes in concept modulation, enabling users to adjust the level of detail in generated anime-style images. By manipulating a concept slider, users can create images ranging from highly detailed to more abstract representations.
## Model Details
- **Developed by:** Linaqruf
- **Model type:** LoRA adapter for Stable Diffusion XL
- **Model description:** This adapter is a concept slider that controls the level of detail in anime-themed images. Values closer to 2 produce more detailed results; values closer to -2 produce less detailed, more abstract ones. It is a versatile tool for artists and creators seeking a range of artistic expressions within anime imagery. A short sketch of how the slider is applied follows this list.
- **License:** CreativeML Open RAIL++-M License
- **Finetuned from model:** Animagine XL 2.0
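
As a rough illustration of the slider semantics (a minimal sketch; `pipe` is the fully configured Stable Diffusion XL pipeline from the Diffusers section below, and the effect described for each value is approximate):

```python
# The slider value is passed as the LoRA scale when fusing the adapter:
#   +2.0 -> strongly increased detail
#    0.0 -> adapter has essentially no effect
#   -2.0 -> flatter, more abstract rendering
detail_slider = 1.5  # any value between -2 and 2
pipe.fuse_lora(lora_scale=detail_slider)  # see the full setup in the next section
```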
## 🧨 Diffusers Installation
Ensure you have the latest `diffusers` library installed, along with the other required packages:
```bash
pip install diffusers --upgrade
pip install transformers accelerate safetensors
```
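
To confirm the packages are importable before running the example below, a quick check (not part of the original card) is:

```python
# Optional sanity check: import the packages and print the installed diffusers version.
import diffusers, transformers, accelerate, safetensors
print("diffusers", diffusers.__version__)
```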
The following Python script demonstrates how to use the LoRA with Animagine XL 2.0. The default scheduler is `EulerAncestralDiscreteScheduler`, but it is set explicitly here for clarity.
```python
import torch
from diffusers import (
    StableDiffusionXLPipeline,
    EulerAncestralDiscreteScheduler,
    AutoencoderKL
)

# Initialize LoRA model and weights
lora_model_id = "Linaqruf/anime-detailer-xl-lora"
lora_filename = "anime-detailer-xl.safetensors"
lora_scale_slider = 2  # -2 for less detailed result

# Load VAE component
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix",
    torch_dtype=torch.float16
)

# Configure the pipeline
pipe = StableDiffusionXLPipeline.from_pretrained(
    "Linaqruf/animagine-xl-2.0",
    vae=vae,
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16"
)
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

# Load and fuse LoRA weights
pipe.load_lora_weights(lora_model_id, weight_name=lora_filename)
pipe.fuse_lora(lora_scale=lora_scale_slider)

# Define prompts and generate the image
prompt = "face focus, cute, masterpiece, best quality, 1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck"
negative_prompt = "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry"

image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    width=1024,
    height=1024,
    guidance_scale=12,
    num_inference_steps=50
).images[0]

# Unfuse LoRA before saving the image
pipe.unfuse_lora()

image.save("anime_girl.png")
```
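
To compare detail levels for the same prompt, one straightforward approach (a minimal sketch, not part of the original card) is to fuse, generate, and unfuse once per slider value so the scales do not accumulate:

```python
# Generate the same prompt at several detail levels for a side-by-side comparison.
# fuse_lora/unfuse_lora are paired per value so the LoRA scales do not stack.
for slider in (-2, 0, 2):
    pipe.fuse_lora(lora_scale=slider)
    image = pipe(
        prompt,
        negative_prompt=negative_prompt,
        width=1024,
        height=1024,
        guidance_scale=12,
        num_inference_steps=50,
    ).images[0]
    pipe.unfuse_lora()
    image.save(f"anime_girl_detail_{slider}.png")
```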
## Acknowledgements
Our project has been enriched by the following significant works:
- Erasing Concepts from Diffusion Models by Rohit Gandikota et al.
- LECO by p1atdev.
- AI Toolkit by Ostris.