Update README.md
README.md CHANGED
@@ -28,7 +28,36 @@ For details on the development and training of our model, please refer to our bl

- **Summary:** This model generates images based on text prompts. It is a Latent Diffusion Model that uses two fixed, pre-trained text encoders (OpenCLIP-ViT/G and CLIP-ViT/L). It follows the same architecture as [Stable Diffusion XL](https://huggingface.co/docs/diffusers/en/using-diffusers/sdxl).

### Using the model with 🧨 Diffusers

Install diffusers >= 0.26.0 and some dependencies:

```
pip install "diffusers>=0.26.0" transformers accelerate safetensors
```
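
The custom pipeline below targets diffusers >= 0.26.0; as an optional check (not part of the original card), you can confirm the installed version before continuing:

```
import diffusers

# Should print 0.26.0 or newer
print(diffusers.__version__)
```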

To run our model, you will need to use our custom pipeline from this gist: https://gist.github.com/aykamko/402e948a8fdbbc9613f9978802d90194
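
For illustration, one way to bring the gist code into scope is to save it as a local module and import the pipeline class it defines; the file name `playground_v25_pipeline.py` is a hypothetical choice here, not something the gist prescribes:

```
# Hypothetical file name: save the gist's code as playground_v25_pipeline.py
# next to your script, then import the custom pipeline class it defines.
from playground_v25_pipeline import PlaygroundV2dot5Pipeline
```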

**Notes:**
- Only the Euler, Heun, and DPM++ 2M Karras schedulers have been tested
- We recommend using `guidance_scale=7.0` for Euler/Heun and `guidance_scale=5.0` for DPM++ 2M Karras (see the scheduler sketch after the snippet below)

Then, run the following snippet:

```
import torch

# Include code from gist: https://gist.github.com/aykamko/402e948a8fdbbc9613f9978802d90194

pipe = PlaygroundV2dot5Pipeline.from_pretrained(
    "playgroundai/playground-v2.5-1024px-aesthetic",
    torch_dtype=torch.float16,
    use_safetensors=True,
    add_watermarker=False,
    variant="fp16",
)
pipe.to("cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt=prompt, guidance_scale=7.0).images[0]
```
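
The snippet above runs the pipeline's default scheduler at `guidance_scale=7.0`. To follow the DPM++ 2M Karras recommendation from the notes, one option is the stock diffusers scheduler-swap pattern sketched below; this assumes the gist's `PlaygroundV2dot5Pipeline` exposes the usual diffusers `scheduler` attribute and config, which the gist may handle differently.

```
from diffusers import DPMSolverMultistepScheduler

# DPM++ 2M Karras in diffusers terms: multistep DPM-Solver with Karras sigmas.
# `pipe` is the PlaygroundV2dot5Pipeline instance created in the snippet above.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

# Lower guidance scale recommended for DPM++ 2M Karras (see notes above)
image = pipe(prompt=prompt, guidance_scale=5.0).images[0]
```

Euler and Heun follow the same pattern with `EulerDiscreteScheduler` or `HeunDiscreteScheduler` and `guidance_scale=7.0`.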

### Using the model with Automatic1111/ComfyUI