---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-xl-base-1.0
pipeline_tag: text-to-image
library_name: diffusers
tags:
- art
- text-to-image
- stable-diffusion
- lora
- diffusers
widget:
- text: chibi doll, cute
---

# cutton_doll_lora-xl

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64366a453193f279361ced90/_hPyQqPAkHWyaCHoPzxXQ.png)

### Need more performance? Use it with an LCM LoRA!

Use 8 inference steps and a guidance scale of 1.5. A LoRA strength of 1.2 for this LoRA works better.

```python
from diffusers import DiffusionPipeline, LCMScheduler
import torch

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
lcm_lora_id = "latent-consistency/lcm-lora-sdxl"

# Load the SDXL base pipeline and swap in the LCM scheduler
pipe = DiffusionPipeline.from_pretrained(model_id, variant="fp16")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# Load both LoRAs and combine them: LCM LoRA at 1.0, this LoRA at 1.2
pipe.load_lora_weights(lcm_lora_id, adapter_name="lora")
pipe.load_lora_weights("./cutton_doll_lora-xl.safetensors", adapter_name="doll_sdxl")
pipe.set_adapters(["lora", "doll_sdxl"], adapter_weights=[1.0, 1.2])
pipe.to(device="cuda", dtype=torch.float16)

prompt = "a chibi doll, cute"
negative_prompt = "3d render, realistic"

num_images = 9

for i in range(num_images):
    img = pipe(
        prompt=prompt,
        negative_prompt=negative_prompt,
        num_inference_steps=8,
        guidance_scale=1.5,
    ).images[0]
    img.save(f"lcm_lora_{i}.png")
```

### Tips:

- Don't use a refiner
- Works great with only 1 text encoder
- No style prompt required
- No trigger keyword required
- Works great with isometric and non-isometric prompts
- Works with SDXL 0.9 and 1.0

#### Changelog

v1: Initial release
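
### Basic usage (without LCM)

If you don't need the LCM speed-up, the LoRA can be loaded on top of the plain SDXL base pipeline. The sketch below is a minimal example, not an author-provided recipe: it reuses the `cutton_doll_lora-xl.safetensors` filename from the snippet above, and the step count (30) and guidance scale (7.5) are generic SDXL defaults rather than tuned values.

```python
from diffusers import DiffusionPipeline
import torch

# Assumed local path to this LoRA's weights; adjust to wherever you keep the file.
lora_path = "./cutton_doll_lora-xl.safetensors"

# Standard SDXL base pipeline in fp16
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# Load only this LoRA (no LCM adapter)
pipe.load_lora_weights(lora_path)

# 30 steps / guidance 7.5 are generic SDXL defaults, not recommendations from this card.
image = pipe(
    prompt="a chibi doll, cute",
    negative_prompt="3d render, realistic",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("doll.png")
```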