qaihm-bot committed
Commit 8576be4
Parent(s): 6cf8fca

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +49 -34

README.md CHANGED
@@ -16,7 +16,7 @@ tags:
 
 Generates high resolution images from text prompts using a latent diffusion model. This model uses CLIP ViT-L/14 as text encoder, U-Net based latent denoising, and VAE based decoder to generate the final image.
 
-This model is an implementation of Stable-Diffusion-v2.1 found [here](https://github.com/CompVis/stable-diffusion/tree/main).
 This repository provides scripts to run Stable-Diffusion-v2.1 on Qualcomm® devices.
 More details on model performance across various devices can be found
 [here](https://aihub.qualcomm.com/models/stable_diffusion_v2_1_quantized).
@@ -32,16 +32,23 @@ More details on model performance across various devices, can be found
 - VAE Decoder Number of parameters: 83M
 - Model size: 1GB
 
-| Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
-|---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Binary | 11.633 ms | 0 - 1 MB | INT8 | NPU | [TextEncoder_Quantized.bin](https://huggingface.co/qualcomm/Stable-Diffusion-v2.1/blob/main/TextEncoder_Quantized.bin) |
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Binary | 217.134 ms | 0 - 2 MB | INT8 | NPU | [VAEDecoder_Quantized.bin](https://huggingface.co/qualcomm/Stable-Diffusion-v2.1/blob/main/VAEDecoder_Quantized.bin) |
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Binary | 101.094 ms | 0 - 2 MB | INT8 | NPU | [UNet_Quantized.bin](https://huggingface.co/qualcomm/Stable-Diffusion-v2.1/blob/main/UNet_Quantized.bin) |
-
-
 
 ## Installation
 
@@ -97,30 +104,34 @@ device. This script does the following:
 ```bash
 python -m qai_hub_models.models.stable_diffusion_v2_1_quantized.export
 ```
-
 ```
-Profile Job summary of TextEncoder_Quantized
---------------------------------------------------
-Device: QCS8550 (Proxy) (12)
-Estimated Inference Time: 10.70 ms
-Estimated Peak Memory Range: 0.03-1.24 MB
-Compute Units: NPU (1040) | Total (1040)
-
-Profile Job summary of VAEDecoder_Quantized
---------------------------------------------------
-Device: QCS8550 (Proxy) (12)
-Estimated Inference Time: 225.42 ms
-Estimated Peak Memory Range: 0.40-1.52 MB
-Compute Units: NPU (170) | Total (170)
-
-Profile Job summary of UNet_Quantized
---------------------------------------------------
-Device: QCS8550 (Proxy) (12)
-Estimated Inference Time: 96.63 ms
-Estimated Peak Memory Range: 0.53-1.92 MB
-Compute Units: NPU (6361) | Total (6361)
-
 ```
 
@@ -231,15 +242,19 @@ provides instructions on how to use the `.so` shared library or `.bin` context b
 Get more details on Stable-Diffusion-v2.1's performance across various devices [here](https://aihub.qualcomm.com/models/stable_diffusion_v2_1_quantized).
 Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).
 
 ## License
-- The license for the original implementation of Stable-Diffusion-v2.1 can be found
-[here](https://github.com/CompVis/stable-diffusion/blob/main/LICENSE).
-- The license for the compiled assets for on-device deployment can be found [here](https://github.com/CompVis/stable-diffusion/blob/main/LICENSE).
 
 ## References
 * [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752)
 * [Source Model Implementation](https://github.com/CompVis/stable-diffusion/tree/main)
 
 ## Community
 * Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
 
 Generates high resolution images from text prompts using a latent diffusion model. This model uses CLIP ViT-L/14 as text encoder, U-Net based latent denoising, and VAE based decoder to generate the final image.
 
+This model is an implementation of Stable-Diffusion-v2.1 found [here]({source_repo}).
 This repository provides scripts to run Stable-Diffusion-v2.1 on Qualcomm® devices.
 More details on model performance across various devices can be found
 [here](https://aihub.qualcomm.com/models/stable_diffusion_v2_1_quantized).
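The three-stage pipeline described above (text encoder → iterative U-Net denoising → VAE decode) can be sketched as a toy loop. Everything here is a hypothetical stand-in for illustration only: the shapes, step count, the fake encoder/U-Net/decoder, and the naive update rule are not the real model or scheduler.

```python
import numpy as np

# Illustrative shapes: SD v2.1 denoises a 4x64x64 latent and uses
# 77-token, 1024-dim text embeddings (values below are stand-ins).
LATENT_SHAPE = (4, 64, 64)
EMBED_SHAPE = (77, 1024)

def encode_text(prompt: str) -> np.ndarray:
    """Stand-in for the CLIP text encoder: prompt -> token embeddings."""
    rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
    return rng.standard_normal(EMBED_SHAPE)

def unet_predict_noise(latent, embeddings, t):
    """Stand-in for the U-Net: predicts the noise in the latent at step t."""
    return 0.1 * latent  # placeholder prediction, not a real network

def vae_decode(latent: np.ndarray) -> np.ndarray:
    """Stand-in for the VAE decoder: upsamples the latent to a 512x512 image."""
    return np.repeat(np.repeat(latent[:3], 8, axis=1), 8, axis=2)

def generate(prompt: str, steps: int = 20) -> np.ndarray:
    emb = encode_text(prompt)                       # text encoder runs once
    latent = np.random.default_rng(0).standard_normal(LATENT_SHAPE)
    for t in range(steps):                          # U-Net runs once per step
        noise = unet_predict_noise(latent, emb, t)
        latent = latent - noise                     # simplistic "scheduler" update
    return vae_decode(latent)                       # VAE decoder runs once

image = generate("a photo of an astronaut riding a horse")
print(image.shape)  # (3, 512, 512)
```

This per-component structure is why the tables below report the text encoder, U-Net, and VAE decoder separately: each is compiled and profiled as its own on-device model.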
 
 - VAE Decoder Number of parameters: 83M
 - Model size: 1GB
 
+| Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
+|---|---|---|---|---|---|---|---|---|
+| TextEncoder_Quantized | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 11.633 ms | 0 - 1 MB | INT8 | NPU | [Stable-Diffusion-v2.1.bin](https://huggingface.co/qualcomm/Stable-Diffusion-v2.1/blob/main/TextEncoder_Quantized.bin) |
+| TextEncoder_Quantized | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 7.759 ms | 0 - 8 MB | INT8 | NPU | [Stable-Diffusion-v2.1.bin](https://huggingface.co/qualcomm/Stable-Diffusion-v2.1/blob/main/TextEncoder_Quantized.bin) |
+| TextEncoder_Quantized | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 11.773 ms | 0 - 0 MB | INT8 | NPU | Use Export Script |
+| TextEncoder_Quantized | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 10.7 ms | 0 - 1 MB | UINT16 | NPU | Use Export Script |
+| VAEDecoder_Quantized | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 217.134 ms | 0 - 2 MB | INT8 | NPU | [Stable-Diffusion-v2.1.bin](https://huggingface.co/qualcomm/Stable-Diffusion-v2.1/blob/main/VAEDecoder_Quantized.bin) |
+| VAEDecoder_Quantized | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 161.705 ms | 0 - 8 MB | INT8 | NPU | [Stable-Diffusion-v2.1.bin](https://huggingface.co/qualcomm/Stable-Diffusion-v2.1/blob/main/VAEDecoder_Quantized.bin) |
+| VAEDecoder_Quantized | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 220.179 ms | 0 - 0 MB | INT8 | NPU | Use Export Script |
+| VAEDecoder_Quantized | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 225.416 ms | 0 - 2 MB | UINT16 | NPU | Use Export Script |
+| UNet_Quantized | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 101.094 ms | 0 - 2 MB | INT8 | NPU | [Stable-Diffusion-v2.1.bin](https://huggingface.co/qualcomm/Stable-Diffusion-v2.1/blob/main/UNet_Quantized.bin) |
+| UNet_Quantized | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 72.62 ms | 0 - 8 MB | INT8 | NPU | [Stable-Diffusion-v2.1.bin](https://huggingface.co/qualcomm/Stable-Diffusion-v2.1/blob/main/UNet_Quantized.bin) |
+| UNet_Quantized | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 102.486 ms | 0 - 0 MB | INT8 | NPU | Use Export Script |
+| UNet_Quantized | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 96.631 ms | 1 - 2 MB | UINT16 | NPU | Use Export Script |
 
 ## Installation
 
 ```bash
 python -m qai_hub_models.models.stable_diffusion_v2_1_quantized.export
 ```
 ```
+Profiling Results
+------------------------------------------------------------
+TextEncoder_Quantized
+Device                          : Samsung Galaxy S23 (13)
+Runtime                         : QNN
+Estimated inference time (ms)   : 11.6
+Estimated peak memory usage (MB): [0, 1]
+Total # Ops                     : 1040
+Compute Unit(s)                 : NPU (1040 ops)
+
+------------------------------------------------------------
+VAEDecoder_Quantized
+Device                          : Samsung Galaxy S23 (13)
+Runtime                         : QNN
+Estimated inference time (ms)   : 217.1
+Estimated peak memory usage (MB): [0, 2]
+Total # Ops                     : 170
+Compute Unit(s)                 : NPU (170 ops)
+
+------------------------------------------------------------
+UNet_Quantized
+Device                          : Samsung Galaxy S23 (13)
+Runtime                         : QNN
+Estimated inference time (ms)   : 101.1
+Estimated peak memory usage (MB): [0, 2]
+Total # Ops                     : 6361
+Compute Unit(s)                 : NPU (6361 ops)
 ```
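From the per-model numbers above, a rough end-to-end latency can be estimated: the text encoder and VAE decoder each run once per generated image, while the U-Net runs once per denoising step. This back-of-the-envelope sketch ignores classifier-free guidance (which typically doubles the text-encoder and/or U-Net invocations) and any scheduling or data-movement overhead, so it is a lower bound, not a measured result:

```python
# Per-model latencies on Samsung Galaxy S23 from the profiling output above.
TEXT_ENCODER_MS = 11.6
UNET_MS = 101.1
VAE_DECODER_MS = 217.1

def estimated_latency_ms(num_steps: int) -> float:
    """Lower-bound pipeline latency: encoder once, U-Net per step, VAE once."""
    return TEXT_ENCODER_MS + num_steps * UNET_MS + VAE_DECODER_MS

print(round(estimated_latency_ms(20) / 1000, 2))  # ~2.25 s for 20 steps
```

The U-Net term dominates, which is why reducing the number of denoising steps is the main lever for on-device generation time.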
 
 Get more details on Stable-Diffusion-v2.1's performance across various devices [here](https://aihub.qualcomm.com/models/stable_diffusion_v2_1_quantized).
 Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).
 
+## License
+* The license for the original implementation of Stable-Diffusion-v2.1 can be found [here](https://github.com/CompVis/stable-diffusion/blob/main/LICENSE).
+* The license for the compiled assets for on-device deployment can be found [here](https://github.com/CompVis/stable-diffusion/blob/main/LICENSE).
 
 ## References
 * [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752)
 * [Source Model Implementation](https://github.com/CompVis/stable-diffusion/tree/main)
 
 ## Community
 * Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).