adamelliotfields committed on
Commit 7b8e908
1 Parent(s): de96e86

Update usage

Files changed (1):
  usage.md +7 -9
usage.md CHANGED
@@ -1,6 +1,6 @@
  ## Usage
 
- Enter a prompt and click `Generate`.
+ Enter a prompt and click `Generate`. Roll the `🎲` for a random prompt.
 
  ### Prompting
 
@@ -12,8 +12,6 @@ Positive and negative prompts are embedded by [Compel](https://github.com/damian
 
  Note that `++` is `1.1^2` (and so on). See [syntax features](https://github.com/damian0815/compel/blob/main/doc/syntax.md) to learn more and read [Civitai](https://civitai.com)'s guide on [prompting](https://education.civitai.com/civitais-prompt-crafting-guide-part-1-basics/) for best practices.
 
- You can also press the `🎲` button to generate a random prompt.
-
  #### Arrays
 
  Arrays allow you to generate different images from a single prompt. For example, `[[cat,corgi]]` will expand into 2 separate prompts. Make sure `Images` is set accordingly (e.g., 2). Only works for the positive prompt. Inspired by [Fooocus](https://github.com/lllyasviel/Fooocus/pull/1503).
@@ -59,25 +57,25 @@ Denoising strength is essentially how much the generation will differ from the i
 
  In an image-to-image pipeline, the input image is used as the initial latent. With [IP-Adapter](https://github.com/tencent-ailab/IP-Adapter) (Ye et al. 2023), the input image is processed by a separate image encoder and the encoded features are used as conditioning along with the text prompt.
 
- For capturing faces, enable `IP-Adapter Face` to use the full-face model. You should use an input image that is mostly a face along with the Realistic Vision model. The input image should also be the same aspect ratio as the output to avoid distortion.
+ For capturing faces, enable `IP-Adapter Face` to use the full-face model. You should use an input image that is mostly a face along with the Realistic Vision model.
 
  ### Advanced
 
  #### DeepCache
 
- [DeepCache](https://github.com/horseee/DeepCache) (Ma et al. 2023) caches lower UNet layers and reuses them every `Interval` steps:
- * `1`: no caching
- * `2`: more quality (default)
+ [DeepCache](https://github.com/horseee/DeepCache) (Ma et al. 2023) caches lower UNet layers and reuses them every `Interval` steps. Trade quality for speed:
+ * `1`: no caching (default)
+ * `2`: more quality
  * `3`: balanced
  * `4`: more speed
 
  #### FreeU
 
- [FreeU](https://github.com/ChenyangSi/FreeU) (Si et al. 2023) re-weights the contributions sourced from the UNet’s skip connections and backbone feature maps to potentially improve image quality.
+ [FreeU](https://github.com/ChenyangSi/FreeU) (Si et al. 2023) re-weights the contributions sourced from the UNet’s skip connections and backbone feature maps. Can sometimes improve image quality.
 
  #### Clip Skip
 
- When enabled, the last CLIP layer is skipped. This can sometimes improve image quality with anime models.
+ When enabled, the last CLIP layer is skipped. Can sometimes improve image quality.
 
  #### Tiny VAE
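The array syntax in the diff above (`[[cat,corgi]]` expanding into 2 separate prompts) can be sketched as a small expansion function. This is a hypothetical illustration of the behavior described in usage.md, not the app's actual implementation:

```python
import itertools
import re

def expand_arrays(prompt: str) -> list[str]:
    """Expand [[a,b]] groups into one prompt per combination of options."""
    groups = re.findall(r"\[\[(.*?)\]\]", prompt)
    if not groups:
        return [prompt]
    options = [group.split(",") for group in groups]
    prompts = []
    for combo in itertools.product(*options):
        expanded = prompt
        for choice in combo:
            # Replace the first remaining [[...]] group with this option.
            expanded = re.sub(r"\[\[.*?\]\]", choice.strip(), expanded, count=1)
        prompts.append(expanded)
    return prompts
```

`expand_arrays("a photo of a [[cat,corgi]]")` yields one prompt per option, which is why `Images` should be set to the number of expanded prompts.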
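On the note that `++` is `1.1^2`: each `+` in a Compel prompt multiplies the token's attention weight by 1.1, so a run of n plus signs weights it by 1.1^n. A quick arithmetic sketch, purely illustrative:

```python
def plus_weight(n: int) -> float:
    # Each '+' multiplies attention by 1.1, so '++' is 1.1 ** 2 == 1.21.
    return 1.1 ** n
```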
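The `Interval` setting in the DeepCache section can be pictured as a step schedule: deep UNet features are recomputed on every `Interval`-th denoising step and reused from cache otherwise. A toy sketch of that schedule, assuming this interpretation; it is not DeepCache's actual API:

```python
def deepcache_schedule(num_steps: int, interval: int) -> list[str]:
    """Mark which denoising steps recompute the deep UNet layers ('full')
    versus reuse the cached features from the last full step ('cached')."""
    if interval <= 1:
        return ["full"] * num_steps  # interval 1 disables caching
    return ["full" if step % interval == 0 else "cached" for step in range(num_steps)]
```

A larger interval means fewer full passes, trading quality for speed as the option list describes.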