FuzzyHazel, FuzzyAlmond

HazyAbyss - Download
OctaFuzz - Download
MareAcernis - Download
RefSlaveV2 - Download
dlfmaanjffhgkwl v2 - Download

Guardian Tales 三七-SAL-独轮车 | Chibi Style Lora 52 - Download
Komowata Haruka (こもわた遙華) Chibi Art Style LoRA - Download
Terada Tera (寺田てら) Art Style LoRA - Download
Yaro Artstyle LoRA - Download

EasyNegative and pastelmix-lora seem to work well with the models.

EasyNegative - Download
pastelmix-lora - Download

Formula

Each MBW step below merges model A (first line) with model B (second line) to produce the output checkpoint (third line). base_alpha is the ratio applied to layers outside the UNet blocks; the 25 Weight_values are the per-block ratios for IN00–IN11, M00, and OUT00–OUT11, each giving the share taken from model B. A Python sketch of how such a merge is applied follows the recipes.

MBW
HazyAbyss.safetensors [d7b0072ef7]
octafuzz.safetensors [364bdf849d]
0000.safetensors
base_alpha=1
Weight_values=1,1,0,0,0,0.5,1,1,0.5,0,0,0,1,0,0,0,0.5,1,1,0.5,0,0,0,1,1

MBW
0000.safetensors [360691971b]
mareacernis.safetensors [fbc82b317d]
0001.safetensors
base_alpha=0
Weight_values=0.5,0,0,0,0,0,0,0,0.5,0.5,0,0,0.25,0.5,0.5,0.5,0.25,0.25,0.25,0.25,0.5,0.5,0.5,0,0

MBW
0001.safetensors [ac67bd1235]
refslavev2.safetensors [cce9a2d200]
0002.safetensors
base_alpha=0
Weight_values=0,0.5,1,1,0.5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1

MBW
0002.safetensors [cc5331b8ae]
dlf.safetensors [d596b45d6b]
FuzzyHazel.safetensors
base_alpha=0
Weight_values=0,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0
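
A minimal sketch of applying an MBW recipe like the ones above, assuming the safetensors and torch packages and the standard SD1.x checkpoint key layout. The key patterns and the IN/M/OUT ordering of Weight_values follow the Merge Block Weighted extension's usual convention; this is an illustration, not the extension's exact code.

```python
import re
import torch
from safetensors.torch import load_file, save_file

def block_alpha(key: str, base_alpha: float, weights: list) -> float:
    """Map a checkpoint key to one of the 25 per-block ratios
    (weights = [IN00..IN11, M00, OUT00..OUT11])."""
    m = re.search(r"model\.diffusion_model\.input_blocks\.(\d+)\.", key)
    if m:
        return weights[int(m.group(1))]          # IN00..IN11 -> indices 0..11
    if "model.diffusion_model.middle_block." in key:
        return weights[12]                       # M00
    m = re.search(r"model\.diffusion_model\.output_blocks\.(\d+)\.", key)
    if m:
        return weights[13 + int(m.group(1))]     # OUT00..OUT11 -> indices 13..24
    return base_alpha                            # text encoder, VAE, time embed, ...

def mbw_merge(path_a: str, path_b: str, path_out: str,
              base_alpha: float, weights: list) -> None:
    a, b = load_file(path_a), load_file(path_b)
    merged = {}
    for key, ta in a.items():
        if key not in b or not ta.is_floating_point():
            merged[key] = ta                     # unmergeable keys copied from A
            continue
        alpha = block_alpha(key, base_alpha, weights)
        merged[key] = (1.0 - alpha) * ta + alpha * b[key]  # alpha = share of model B
    save_file(merged, path_out)

# First step of the formula above:
mbw_merge("HazyAbyss.safetensors", "octafuzz.safetensors", "0000.safetensors",
          base_alpha=1,
          weights=[1,1,0,0,0,0.5,1,1,0.5,0,0,0,1,0,0,0,0.5,1,1,0.5,0,0,0,1,1])
```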

SuperMerger LoRA Merge
model_0    : FuzzyHazel.safetensors
model_Out  : FuzzyAlmond.safetensors
LoRA       : <lora:guardiantales:0.25>, <lora:komowata:0.25>, <lora:terada:0.25>, <lora:yaro:0.25>
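
The LoRA merge bakes each LoRA's low-rank delta into the checkpoint weights at the given ratio. Below is a sketch of the per-module update for a linear layer, assuming the usual LoRA factorization (conv modules need extra reshaping, omitted here); the function and tensor names are illustrative, not SuperMerger's API.

```python
import torch

def apply_lora_delta(w: torch.Tensor, up: torch.Tensor, down: torch.Tensor,
                     alpha: float, ratio: float) -> torch.Tensor:
    """Linear-module case: W' = W + ratio * (alpha / rank) * (up @ down)."""
    rank = down.shape[0]
    return w + ratio * (alpha / rank) * (up @ down)

# Each of the four style LoRAs above is folded in at ratio 0.25, e.g.:
# w_new = apply_lora_delta(w, lora_up, lora_down, lora_alpha, 0.25)
```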

Samples

All of the sample images below use the following negative prompt and settings; EXIF metadata is preserved in the files.

Negative prompt: (worst quality, low quality:1.4), EasyNegative, bad anatomy, bad hands, error, missing fingers, extra digit, fewer digits
Steps: 28, Sampler: DPM++ 2M Karras, CFG scale: 7, Size: 768x512, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires upscale: 1.5, Hires steps: 14, Hires upscaler: Latent (nearest-exact)
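
For reference, a rough diffusers approximation of these settings (a sketch, not the exact A1111 pipeline: ENSD and the Latent hires pass are not reproduced, the prompt is a placeholder, and clip_skip requires a recent diffusers version):

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_single_file(
    "FuzzyHazel.safetensors", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True      # DPM++ 2M Karras
)
pipe.load_textual_inversion("EasyNegative.safetensors", token="EasyNegative")

negative = ("(worst quality, low quality:1.4), EasyNegative, bad anatomy, "
            "bad hands, error, missing fingers, extra digit, fewer digits")
image = pipe(
    "1girl, solo",                  # placeholder prompt
    negative_prompt=negative,
    num_inference_steps=28,         # Steps: 28
    guidance_scale=7.0,             # CFG scale: 7
    width=768, height=512,          # Size: 768x512
    clip_skip=2,                    # Clip skip: 2
).images[0]
image.save("sample.png")
```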

FuzzyHazel

(sample grid: images A1–A8 and AA–AD)

FuzzyAlmond

(sample grid: images B1–B8 and BA–BD)
