patrickvonplaten committed
Commit 030ae4a · 1 Parent(s): 65b9bdd

Update README.md

Files changed (1)
  1. README.md +34 -14
README.md CHANGED
@@ -4,20 +4,40 @@ license: mit
  
  ## How to run
  
- ```python
- import torch
- from diffusers import AutoPipelineForText2Image
- from diffusers.pipelines.wuerstchen import WuerstchenPrior
+ This pipeline should be run together with https://huggingface.co/warp-diffusion/wuerstchen:
  
- prior_model = WuerstchenPrior.from_pretrained("warp-diffusion/wuerstchen-prior", torch_dtype=torch.float16)
- pipe = AutoPipelineForText2Image.from_pretrained("warp-diffusion/wuerstchen", prior_prior=prior_model, torch_dtype=torch.float16).to("cuda")
+ ```py
+ import torch
+ from diffusers import WuerstchenDecoderPipeline, WuerstchenPriorPipeline
+
+ device = "cuda"
+ dtype = torch.float16
+ num_images_per_prompt = 2
+
+ prior_pipeline = WuerstchenPriorPipeline.from_pretrained(
+     "warp-diffusion/wuerstchen-prior", torch_dtype=dtype
+ ).to(device)
+ decoder_pipeline = WuerstchenDecoderPipeline.from_pretrained(
+     "warp-diffusion/wuerstchen", torch_dtype=dtype
+ ).to(device)
+
+ caption = "A captivating artwork of a mysterious stone golem"
+ negative_prompt = ""
  
- prompt = [
-     "An old destroyed car standing on a cliff in norway, cinematic photography",
-     "Western movie, closeup cinematic photography",
-     "Pink nike shoe commercial, closeup cinematic photography",
-     "Croatia, closeup cinematic photography",
-     "South Tyrol mountains at sunset, closeup cinematic photography",
- ]
- images = pipe(prompt, guidance_scale=8.0, width=1024, height=1024).images
+ prior_output = prior_pipeline(
+     prompt=caption,
+     height=1024,
+     width=1024,
+     negative_prompt=negative_prompt,
+     guidance_scale=4.0,
+     num_images_per_prompt=num_images_per_prompt,
+ )
+ decoder_output = decoder_pipeline(
+     image_embeddings=prior_output.image_embeddings,
+     prompt=caption,
+     negative_prompt=negative_prompt,
+     num_images_per_prompt=num_images_per_prompt,
+     guidance_scale=0.0,
+     output_type="pil",
+ ).images
  ```
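
For reference, because the updated snippet passes `output_type="pil"`, `decoder_output` is a list of `PIL.Image` objects. Below is a minimal sketch (not part of this commit) for saving those images to disk, assuming the snippet above has already run; the filename pattern is just an example:

```py
# Minimal sketch, not part of the commit: persist the decoded images.
# Assumes `decoder_output` from the snippet above, i.e. a list of PIL images.
for i, image in enumerate(decoder_output):
    image.save(f"wuerstchen_{i}.png")  # example filename pattern
```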