---
license: other
language:
  - en
pipeline_tag: text-to-image
tags:
  - stable-diffusion
  - alimama-creative
library_name: diffusers
---

# SD3 ControlNet Inpainting

*Example image: "a woman wearing a white jacket, black hat and black pants is standing in a field, the hat writes SD3"*

*Example image: "a person wearing a white shoe, carrying a white bucket with text "alibaba" on it"*

Finetuned ControlNet inpainting model based on sd3-medium. The inpainting model offers several advantages:

- Leveraging the SD3 16-channel VAE and its high-resolution generation capability at 1024, the model effectively preserves the integrity of non-inpainting regions, including text.
- It is capable of generating text through inpainting.
- It demonstrates superior aesthetic performance in portrait generation.

## Compared with SDXL-Inpainting

From left to right: input image, masked image, SDXL inpainting, ours.

Example prompts:

- a tiger sitting on a park bench
- a dog sitting on a park bench
- a young woman wearing a blue and pink floral dress
- a woman wearing a white jacket, black hat and black pants is standing in a field, the hat writes SD3
- an air conditioner hanging on the bedroom wall

## Using with Diffusers

Step 1: Make sure you upgrade to the latest version of diffusers (>= 0.29.2): `pip install -U diffusers`.
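
As an optional sanity check, you can verify the installed version with the same helper the demo below uses:

```python
# Optional: confirm the installed diffusers version is new enough (>= 0.29.2).
from diffusers.utils import check_min_version

check_min_version("0.29.2")  # raises an error if the installed version is older
```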

Step 2: Download the two required Python files (controlnet_sd3.py and pipeline_stable_diffusion_3_controlnet_inpainting.py) from GitHub. (We will merge this feature into official Diffusers.)
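
For illustration only, assuming the two files are controlnet_sd3.py and pipeline_stable_diffusion_3_controlnet_inpainting.py (the modules imported in the demo below), you could fetch them into your working directory like this; the URLs are placeholders for the actual raw GitHub links:

```python
# Sketch only: substitute the real raw GitHub URLs for the placeholders below.
import urllib.request

files = {
    "controlnet_sd3.py": "<raw-github-url>/controlnet_sd3.py",
    "pipeline_stable_diffusion_3_controlnet_inpainting.py": "<raw-github-url>/pipeline_stable_diffusion_3_controlnet_inpainting.py",
}
for filename, url in files.items():
    urllib.request.urlretrieve(url, filename)
```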

Step 3: Run demo.py or the following:

```python
import torch
from diffusers.utils import load_image, check_min_version

# Local files downloaded in Step 2
from controlnet_sd3 import SD3ControlNetModel
from pipeline_stable_diffusion_3_controlnet_inpainting import StableDiffusion3ControlNetInpaintingPipeline

check_min_version("0.29.2")

# Build model
controlnet = SD3ControlNetModel.from_pretrained(
    "alimama-creative/SD3-Controlnet-Inpainting",
    use_safetensors=True,
    extra_conditioning_channels=1,
)
pipe = StableDiffusion3ControlNetInpaintingPipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    controlnet=controlnet,
    torch_dtype=torch.float16,
)
pipe.text_encoder.to(torch.float16)
pipe.controlnet.to(torch.float16)
pipe.to("cuda")

# Load image and mask (use direct "resolve" URLs so the raw files are downloaded)
image = load_image(
    "https://huggingface.co/alimama-creative/SD3-Controlnet-Inpainting/resolve/main/images/prod.png"
)
mask = load_image(
    "https://huggingface.co/alimama-creative/SD3-Controlnet-Inpainting/resolve/main/images/mask.jpeg"
)

# Set args
width = 1024
height = 1024
prompt = "a woman wearing a white jacket, black hat and black pants is standing in a field, the hat writes SD3"
generator = torch.Generator(device="cuda").manual_seed(24)

# Inference
res_image = pipe(
    negative_prompt="deformed, distorted, disfigured, poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, mutated hands and fingers, disconnected limbs, mutation, mutated, ugly, disgusting, blurry, amputation, NSFW",
    prompt=prompt,
    height=height,
    width=width,
    control_image=image,
    control_mask=mask,
    num_inference_steps=28,
    generator=generator,
    controlnet_conditioning_scale=0.95,
    guidance_scale=7,
).images[0]

res_image.save("sd3.png")
```
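
Optionally, if you want the non-inpainted area to stay pixel-identical to the input, you can composite the generated result back onto the original image using the mask. This is not part of the official demo, just a common post-processing step; it assumes the mask is white in the region to be repainted:

```python
from PIL import Image

# Keep the generated pixels only where the mask is white; keep the original elsewhere.
original = image.convert("RGB").resize((width, height))
blend_mask = mask.convert("L").resize((width, height))
composited = Image.composite(res_image, original, blend_mask)
composited.save("sd3_composited.png")
```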

## Training Details

The model was trained for 20k steps at a resolution of 1024x1024 on 12M images drawn from laion2B and internal sources.

- Mixed precision: FP16
- Learning rate: 1e-4
- Batch size: 192
- Timestep sampling mode: 'logit_normal'
- Loss: Flow Matching (see the sketch below)
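
The snippet below is not the training code, just a minimal sketch of what logit-normal timestep sampling and a flow-matching (rectified-flow) objective typically look like; the `model` call and all names are illustrative:

```python
import torch
import torch.nn.functional as F

def sample_logit_normal_timesteps(batch_size, mean=0.0, std=1.0, device="cpu"):
    # "logit_normal": pass a Gaussian sample through a sigmoid to get t in (0, 1).
    return torch.sigmoid(torch.randn(batch_size, device=device) * std + mean)

def flow_matching_loss(model, latents, noise, cond, t):
    # Interpolate linearly between clean latents (t=0) and pure noise (t=1).
    t_ = t.view(-1, 1, 1, 1)
    x_t = (1.0 - t_) * latents + t_ * noise
    # Train the model to predict the constant velocity of this straight path.
    target_velocity = noise - latents
    pred_velocity = model(x_t, t, cond)
    return F.mse_loss(pred_velocity, target_velocity)
```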

## Limitation

Because only 1024x1024 resolution was used during training, inference performs best at this size; other sizes yield suboptimal results. We will initiate multi-resolution training in the future and will open-source the new weights at that time.
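
As a practical workaround (common practice, not from the original authors), resize the image and mask to 1024x1024 before inference and, if needed, resize the result back afterwards:

```python
# Resize inputs to the training resolution before calling the pipeline.
orig_size = image.size                  # remember the original (width, height)
image_1024 = image.resize((1024, 1024))
mask_1024 = mask.resize((1024, 1024))
# ... run the pipeline with image_1024 / mask_1024 and width=height=1024 ...
# res_image = res_image.resize(orig_size)  # optionally restore the original size
```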

## LICENSE

The model is based on SD3 finetuning; therefore, the license follows the original SD3 license.