Image2image doesn't seem to work?
Hey everyone!
So I tried to adapt the image2image example (https://huggingface.co/docs/diffusers/using-diffusers/img2img) to this model, but no luck so far. It looks like AutoPipelineForImage2Image wants to see a model_index.json that doesn't exist here? Sorry if it's too much of a newbie question. Here's the error too:
This isn't a full model, it's only a UNet.
Try something like...
import torch
from diffusers import AutoPipelineForImage2Image, UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained("latent-consistency/lcm-sdxl", torch_dtype=torch.float16, variant="fp16")
pipe = AutoPipelineForImage2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16, variant="fp16")
Note I haven't tested this or even checked it for syntax.
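If that pipeline does load, continuing from the pipe above, an img2img call could look roughly like this. Also untested; the scheduler swap, init image path, prompt, step count and strength are just my guesses at sensible values for an LCM checkpoint:

from diffusers import LCMScheduler
from diffusers.utils import load_image

# LCM checkpoints are meant to run with the LCM scheduler and only a few steps
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

init_image = load_image("init.png")  # placeholder path, swap in your own image
prompt = "a fantasy landscape, detailed"  # placeholder prompt

image = pipe(prompt, image=init_image, num_inference_steps=4, strength=0.5, guidance_scale=1.0).images[0]
image.save("out.png")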
Alternatively, I "frankensteined" a full model at Vargol/lcm_sdxl_full_model.
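That repo should load straight into the stock img2img example, something like the following (same caveat, I haven't tested this, and I'm not sure whether that repo ships an fp16 variant):

import torch
from diffusers import AutoPipelineForImage2Image

# Load the merged model directly; add variant="fp16" only if the repo has those weights
pipe = AutoPipelineForImage2Image.from_pretrained("Vargol/lcm_sdxl_full_model", torch_dtype=torch.float16)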