HighCWu committed
Commit bec9d0d
1 Parent(s): 8292d81

Update README.md

Files changed (1): README.md (+53 -3)
README.md CHANGED
@@ -17,7 +17,7 @@ inference: true
 should probably proofread and complete it, then remove this comment. -->


-# sd-control-lora-v3-HighCWu/sdxl-control-lora-v3-canny-half_skip_attn-rank16-conv_in-rank64
+# sdxl-control-lora-v3-canny

 These are control-lora-v3 weights trained on stabilityai/stable-diffusion-xl-base-1.0 with a new type of conditioning.
 You can find some example images below.
@@ -37,8 +37,58 @@ prompt: portrait of a dancing eagle woman, beautiful blonde haired lakota sioux

 #### How to use

-```python
-# TODO: add an example code snippet for running this diffusion pipeline
+First clone the [control-lora-v3](https://github.com/HighCWu/control-lora-v3) repository and `cd` into the directory:
+```sh
+git clone https://github.com/HighCWu/control-lora-v3
+cd control-lora-v3
+```
+
+Then run the Python code:
+```py
+# !pip install opencv-python transformers accelerate
+from diffusers import AutoencoderKL
+from diffusers.utils import load_image
+from model import UNet2DConditionModelEx
+from pipeline_sdxl import StableDiffusionXLControlLoraV3Pipeline
+import numpy as np
+import torch
+
+import cv2
+from PIL import Image
+
+prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting"
+negative_prompt = "low quality, bad quality, sketches"
+
+# download an image
+image = load_image(
+    "https://hf.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png"
+)
+
+# initialize the models and pipeline
+unet: UNet2DConditionModelEx = UNet2DConditionModelEx.from_pretrained(
+    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="unet", torch_dtype=torch.float16
+)
+unet = unet.add_extra_conditions(["canny"])
+vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
+pipe = StableDiffusionXLControlLoraV3Pipeline.from_pretrained(
+    "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, vae=vae, torch_dtype=torch.float16
+)
+# load attention processors
+pipe.load_lora_weights("HighCWu/sdxl-control-lora-v3-canny")
+pipe.enable_model_cpu_offload()
+
+# get canny image
+image = np.array(image)
+image = cv2.Canny(image, 100, 200)
+image = image[:, :, None]
+image = np.concatenate([image, image, image], axis=2)
+canny_image = Image.fromarray(image)
+
+# generate image
+image = pipe(
+    prompt, image=canny_image
+).images[0]
+image.show()
 ```

 #### Limitations and bias
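
The example added by this commit ends with `image.show()` and defines `negative_prompt` without passing it to the pipeline. As a hedged follow-up, not part of the commit above: assuming `StableDiffusionXLControlLoraV3Pipeline.__call__` accepts the standard diffusers `negative_prompt` and `generator` arguments, a reproducible, headless-friendly variant of the final call could look like the sketch below; the seed and output filename are arbitrary placeholders.

```py
# Reuses `pipe`, `prompt`, `negative_prompt`, and `canny_image` from the
# snippet above. Fixing a seed makes the run reproducible, and saving to
# disk avoids `image.show()` failing on machines without a display.
generator = torch.Generator(device="cpu").manual_seed(0)  # arbitrary seed
image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    image=canny_image,
    generator=generator,
).images[0]
image.save("sdxl_control_lora_v3_canny_out.png")  # placeholder filename
```

Under the same assumption, other common diffusers sampling knobs such as `num_inference_steps` and `guidance_scale` should also apply here.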