Girls' Frontline 2: Exilium Text-to-Image via SDXL LoRA Fine-Tuning
1. Model Library
- Fine-tuning dataset: a Girls' Frontline 2: Exilium LoRA fine-tuning dataset built for SDXL
- Pre-trained model: stable_diffusion_xl
- Base model: animagine-xl-3.0
- SDXL LoRA fine-tuning trainer: kohya_ss
- Dataset quality enhancement: waifu2x (see the sketch after this list)
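- The dataset-enhancement step can be scripted. The sketch below is a hypothetical illustration that batch-upscales raw training images by shelling out to the waifu2x-ncnn-vulkan binary; the binary name, its -i/-o/-n/-s flags, and the dataset/ and dataset_2x/ directory names are assumptions, not files from this repository.
import subprocess
from pathlib import Path

SRC = Path("dataset")      # raw screenshots / fan art (placeholder path)
DST = Path("dataset_2x")   # upscaled copies used for LoRA training (placeholder path)
DST.mkdir(exist_ok=True)

for img in SRC.glob("*.png"):
    # 2x upscale with light denoising; assumes waifu2x-ncnn-vulkan is on PATH
    subprocess.run(
        ["waifu2x-ncnn-vulkan",
         "-i", str(img),
         "-o", str(DST / img.name),
         "-n", "1",   # noise-reduction level
         "-s", "2"],  # scale factor
        check=True,
    )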
2. Prompt Dict
- Girls' Frontline 2: Exilium characters (trigger tokens; see the sketch after this list)
- 佩里缇亚: PKPSP
- 塞布丽娜: SPAS12
- 托洛洛: AKAlfa
- 桑朵莱希: G36
- 琼玖: QBZ191
- 维普雷: Vepr12
- 莫辛纳甘: MosinNagant
- 黛烟: QBZ95
- 克罗丽科: Kroliko
- 夏克里: XCRL
- 奇塔: MP7
- 寇尔芙: TaurusCurve
- 科谢尼娅: APS
- 纳甘: Nagant1895
- 纳美西丝: OM50
- 莉塔拉: GalilARM
- 闪电: OTs14
- Pixiv artist styles
- おにねこ(鬼猫): Onineko26
- 麻生: AsouAsabu
- mignon: Mignon
- migolu: Migolu
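- As a small illustration (not part of the original repository), the trigger tokens above can be kept in plain Python dicts and spliced into a prompt string. The dict names, the build_prompt helper, and the quality tags below are assumptions made for this sketch.
from typing import Optional

# Hypothetical helper: map characters / artists to their trigger tokens
CHARACTER_TOKENS = {
    "闪电": "OTs14",
    "维普雷": "Vepr12",
    "黛烟": "QBZ95",
}
ARTIST_TOKENS = {
    "おにねこ": "Onineko26",
    "mignon": "Mignon",
}

def build_prompt(character: str, artist: Optional[str] = None) -> str:
    tags = ["1girl", CHARACTER_TOKENS[character], "solo", "looking at viewer",
            "masterpiece", "best quality"]
    if artist is not None:
        tags.append(ARTIST_TOKENS[artist])
    return ", ".join(tags)

print(build_prompt("闪电", "mignon"))
# -> 1girl, OTs14, solo, looking at viewer, masterpiece, best quality, Mignon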
3. Usage
- Install the remaining dependencies (PyTorch and the other core prerequisites are assumed to be installed already)
pip install diffusers --upgrade
pip install transformers accelerate safetensors
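- Optional sanity check: confirm the freshly installed packages import cleanly and that CUDA is visible before loading any models.
import torch
import diffusers
import transformers

print(torch.__version__, torch.cuda.is_available())
print(diffusers.__version__, transformers.__version__)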
- Download the base model (animagine-xl-3.0) and the LoRA weights from Hugging Face and build the pipeline
import torch
from PIL import Image
import matplotlib.pyplot as plt
from diffusers import (
    StableDiffusionXLPipeline,
    EulerAncestralDiscreteScheduler,
    AutoencoderKL
)

# LoRA weights fine-tuned on Girls' Frontline 2: Exilium
lora_id = "TfiyuenLau/GirlsFrontline2_SDXL_LoRA"

# fp16-safe SDXL VAE to avoid artifacts in half precision
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix",
    torch_dtype=torch.float16
)

# Base model: animagine-xl-3.0
pipe = StableDiffusionXLPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.0",
    vae=vae,
    torch_dtype=torch.float16,
    use_safetensors=True,
)

# Attach the LoRA and switch to the Euler Ancestral sampler
pipe.load_lora_weights(lora_id)
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.to('cuda')
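- Optional, for GPUs with limited VRAM (not part of the original steps): let diffusers offload idle submodules to the CPU instead of moving the whole pipeline to CUDA, and/or fuse the LoRA into the base weights at a fixed strength; the 0.9 scale below is an arbitrary example.
pipe.enable_model_cpu_offload()   # use instead of pipe.to('cuda'); requires accelerate
pipe.fuse_lora(lora_scale=0.9)    # bake the LoRA in at ~90% strength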
- Generate an image
output = "./output.png"

# Positive prompt: character trigger token (OTs14) plus quality tags
prompt = "1girl, OTs14, gloves, looking at viewer, smile, food, holding, solo, closed mouth, sitting, yellow eyes, black gloves, masterpiece, best quality"
negative_prompt = "nsfw, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name"

# Run the pipeline at the native SDXL resolution
image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    width=1024,
    height=1024,
    guidance_scale=7,
    num_inference_steps=28
).images[0]
image.save(output)

# Reload the saved file and display it
image = Image.open(output)
plt.axis('off')
plt.imshow(image)
plt.show()
image.close()
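- For reproducible outputs, a seeded generator can be passed to the pipeline; the seed value and the num_images_per_prompt setting below are arbitrary examples, not values from the original workflow.
generator = torch.Generator(device="cuda").manual_seed(42)  # arbitrary seed
images = pipe(
    prompt,
    negative_prompt=negative_prompt,
    width=1024,
    height=1024,
    guidance_scale=7,
    num_inference_steps=28,
    num_images_per_prompt=2,   # two variants per call
    generator=generator,
).images
for i, img in enumerate(images):
    img.save(f"./output_{i}.png")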