---
library_name: pytorch
license: creativeml-openrail-m
pipeline_tag: unconditional-image-generation
tags:
- generative_ai
- quantized
- android
---
# Stable-Diffusion-v2.1: Optimized for Mobile Deployment

**State-of-the-art generative AI model used to generate detailed images conditioned on text descriptions**

Generates high-resolution images from text prompts using a latent diffusion model. This model uses OpenCLIP ViT-H/14 as the text encoder, U-Net based latent denoising, and a VAE-based decoder to generate the final image.

This model is an implementation of Stable-Diffusion-v2.1 found here.

This repository provides scripts to run Stable-Diffusion-v2.1 on Qualcomm® devices. More details on model performance across various devices can be found here.
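At a high level, the three components cooperate in a loop: the text encoder embeds the prompt once, the U-Net repeatedly denoises a latent conditioned on that embedding, and the VAE decoder converts the final latent into an image. The sketch below illustrates that flow only; the component names, signatures, and scheduler interface are illustrative, not this package's actual API.

```python
import torch

# Illustrative latent-diffusion loop. The real pipeline also applies a
# noise scheduler and classifier-free guidance, omitted here for brevity.
def generate(text_encoder, unet, vae_decoder, scheduler, prompt_tokens, steps=20):
    cond = text_encoder(prompt_tokens)      # embed the prompt once
    latent = torch.randn(1, 4, 64, 64)      # start from Gaussian noise
    for t in scheduler.timesteps(steps):
        noise_pred = unet(latent, t, cond)  # predict the noise at step t
        latent = scheduler.step(noise_pred, t, latent)  # remove some noise
    return vae_decoder(latent)              # decode the final latent to an image
```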
## Model Details

- **Model Type:** Image generation
- **Model Stats:**
  - Input: Text prompt to generate image
  - Text Encoder Number of parameters: 340M
  - UNet Number of parameters: 865M
  - VAE Decoder Number of parameters: 83M
  - Model size: 1GB
| Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| TextEncoder_Quantized | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 11.633 | 0 - 1 | INT8 | NPU | Stable-Diffusion-v2.1.bin |
| TextEncoder_Quantized | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 7.759 | 0 - 8 | INT8 | NPU | Stable-Diffusion-v2.1.bin |
| TextEncoder_Quantized | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 11.773 | 0 - 0 | INT8 | NPU | Use Export Script |
| TextEncoder_Quantized | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 10.7 | 0 - 1 | UINT16 | NPU | Use Export Script |
| VAEDecoder_Quantized | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 217.134 | 0 - 2 | INT8 | NPU | Stable-Diffusion-v2.1.bin |
| VAEDecoder_Quantized | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 161.705 | 0 - 8 | INT8 | NPU | Stable-Diffusion-v2.1.bin |
| VAEDecoder_Quantized | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 220.179 | 0 - 0 | INT8 | NPU | Use Export Script |
| VAEDecoder_Quantized | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 225.416 | 0 - 2 | UINT16 | NPU | Use Export Script |
| UNet_Quantized | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 101.094 | 0 - 2 | INT8 | NPU | Stable-Diffusion-v2.1.bin |
| UNet_Quantized | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 72.62 | 0 - 8 | INT8 | NPU | Stable-Diffusion-v2.1.bin |
| UNet_Quantized | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 102.486 | 0 - 0 | INT8 | NPU | Use Export Script |
| UNet_Quantized | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 96.631 | 1 - 2 | UINT16 | NPU | Use Export Script |
## Installation

This model can be installed as a Python package via pip.

```bash
pip install "qai-hub-models[stable_diffusion_v2_1_quantized]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device

Sign in to Qualcomm® AI Hub with your Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.

With this API token, you can configure your client to run models on cloud-hosted devices.

```bash
qai-hub configure --api_token API_TOKEN
```

Navigate to docs for more information.
## Demo on-device

The package contains a simple end-to-end demo that downloads pre-trained weights and runs this model on a sample input.

```bash
python -m qai_hub_models.models.stable_diffusion_v2_1_quantized.demo
```

The above demo runs a reference implementation of pre-processing, model inference, and post-processing.

**NOTE**: If you want to run the demo in a Jupyter Notebook or Google Colab-like environment, add the following to your cell (instead of the command above).

```python
%run -m qai_hub_models.models.stable_diffusion_v2_1_quantized.demo
```
## Run model on a cloud-hosted device

In addition to the demo, you can also run the model on a cloud-hosted Qualcomm® device. This script does the following:

- Runs a performance check on a cloud-hosted device
- Downloads compiled assets that can be deployed on-device for Android
- Checks accuracy between PyTorch and on-device outputs

```bash
python -m qai_hub_models.models.stable_diffusion_v2_1_quantized.export
```
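The export script also accepts command-line options (run it with `--help` to list them). For example, assuming the standard qai-hub-models export interface, you can target a different cloud-hosted device:

```bash
# --device is assumed from the common qai-hub-models export interface
python -m qai_hub_models.models.stable_diffusion_v2_1_quantized.export --device "Samsung Galaxy S24"
```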
## Profiling Results

```text
------------------------------------------------------------
TextEncoder_Quantized
Device                          : Samsung Galaxy S23 (13)
Runtime                         : QNN
Estimated inference time (ms)   : 11.6
Estimated peak memory usage (MB): [0, 1]
Total # Ops                     : 1040
Compute Unit(s)                 : NPU (1040 ops)
------------------------------------------------------------
VAEDecoder_Quantized
Device                          : Samsung Galaxy S23 (13)
Runtime                         : QNN
Estimated inference time (ms)   : 217.1
Estimated peak memory usage (MB): [0, 2]
Total # Ops                     : 170
Compute Unit(s)                 : NPU (170 ops)
------------------------------------------------------------
UNet_Quantized
Device                          : Samsung Galaxy S23 (13)
Runtime                         : QNN
Estimated inference time (ms)   : 101.1
Estimated peak memory usage (MB): [0, 2]
Total # Ops                     : 6361
Compute Unit(s)                 : NPU (6361 ops)
------------------------------------------------------------
```
## How does this work?

This export script leverages Qualcomm® AI Hub to optimize, validate, and deploy this model on-device. Let's go through each step below in detail:
### Step 1: Upload compiled model

Upload the compiled models from `qai_hub_models.models.stable_diffusion_v2_1_quantized` to AI Hub.

```python
import torch

import qai_hub as hub
from qai_hub_models.models.stable_diffusion_v2_1_quantized import Model

# Load the model
model = Model.from_precompiled()

model_textencoder_quantized = hub.upload_model(model.text_encoder.get_target_model_path())
model_unet_quantized = hub.upload_model(model.unet.get_target_model_path())
model_vaedecoder_quantized = hub.upload_model(model.vae_decoder.get_target_model_path())
```
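Each call to `hub.upload_model` returns a model handle. As a small sketch (assuming the standard `qai_hub` client API), you can record its ID and fetch the same upload in a later session instead of re-uploading:

```python
# Save the hub-assigned ID so later sessions can reuse the upload;
# hub.get_model is part of the standard qai_hub client API.
unet_model_id = model_unet_quantized.model_id
model_unet_quantized = hub.get_model(unet_model_id)
```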
### Step 2: Performance profiling on cloud-hosted device

After uploading the compiled models in Step 1, each model can be profiled on-device using its `target_model`. Note that this script runs the model on a device automatically provisioned in the cloud. Once the job is submitted, you can navigate to the provided job URL to view a variety of on-device performance metrics.
```python
# Device
device = hub.Device("Samsung Galaxy S23")

profile_job_textencoder_quantized = hub.submit_profile_job(
    model=model_textencoder_quantized,
    device=device,
)

profile_job_unet_quantized = hub.submit_profile_job(
    model=model_unet_quantized,
    device=device,
)

profile_job_vaedecoder_quantized = hub.submit_profile_job(
    model=model_vaedecoder_quantized,
    device=device,
)
```
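Once a profiling job completes, its metrics can also be retrieved programmatically rather than read off the job page; a minimal sketch, assuming the standard `qai_hub` job API:

```python
# Blocks until the job finishes, then returns the raw profile data
# (inference time, memory, per-op compute-unit placement). The exact
# dictionary layout is not documented here; inspect the returned object.
profile = profile_job_unet_quantized.download_profile()
print(profile.keys())
```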
### Step 3: Verify on-device accuracy

To verify the accuracy of the model on-device, you can run on-device inference on sample input data on the same cloud-hosted device.
```python
input_data_textencoder_quantized = model.text_encoder.sample_inputs()
inference_job_textencoder_quantized = hub.submit_inference_job(
    model=model_textencoder_quantized,
    device=device,
    inputs=input_data_textencoder_quantized,
)
on_device_output_textencoder_quantized = inference_job_textencoder_quantized.download_output_data()

input_data_unet_quantized = model.unet.sample_inputs()
inference_job_unet_quantized = hub.submit_inference_job(
    model=model_unet_quantized,
    device=device,
    inputs=input_data_unet_quantized,
)
on_device_output_unet_quantized = inference_job_unet_quantized.download_output_data()

input_data_vaedecoder_quantized = model.vae_decoder.sample_inputs()
inference_job_vaedecoder_quantized = hub.submit_inference_job(
    model=model_vaedecoder_quantized,
    device=device,
    inputs=input_data_vaedecoder_quantized,
)
on_device_output_vaedecoder_quantized = inference_job_vaedecoder_quantized.download_output_data()
```
With the output of the model, you can compute metrics like PSNR and relative error, or spot-check the output against the expected output.
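For example, a minimal PSNR check between a locally computed PyTorch reference and the on-device output might look like the sketch below; `"output_0"` is a placeholder key, so inspect the returned dict for the actual output names.

```python
import numpy as np

def psnr(reference: np.ndarray, candidate: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer agreement."""
    mse = np.mean((reference.astype(np.float64) - candidate.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else float(10 * np.log10(peak ** 2 / mse))

# download_output_data() returns a dict mapping output names to lists of arrays.
device_out = on_device_output_vaedecoder_quantized["output_0"][0]  # placeholder key
reference_out = ...  # the corresponding output from the local PyTorch reference run
print(f"PSNR: {psnr(reference_out, device_out):.2f} dB")
```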
Note: This on-device profiling and inference requires access to Qualcomm® AI Hub. Sign up for access.
## Deploying compiled model to Android

The models can be deployed using multiple runtimes:

- TensorFlow Lite (`.tflite` export): This tutorial provides a guide to deploy the `.tflite` model in an Android application.
- QNN (`.so` / `.bin` export): This sample app provides instructions on how to use the `.so` shared library or `.bin` context binary in an Android application.
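Before wiring an exported model into an Android app, it can be useful to smoke-test the artifact on your workstation. As a sketch, a `.tflite` export can be exercised with the TensorFlow Lite Python interpreter (the file path below is a placeholder; random input only checks that the graph executes, not output quality):

```python
import numpy as np
import tensorflow as tf

# Load the exported artifact and run a single inference on random data.
interpreter = tf.lite.Interpreter(model_path="text_encoder.tflite")  # placeholder path
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
dummy = np.random.rand(*inp["shape"]).astype(inp["dtype"])
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()

out = interpreter.get_output_details()[0]
print(interpreter.get_tensor(out["index"]).shape)
```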
## View on Qualcomm® AI Hub

Get more details on Stable-Diffusion-v2.1's performance across various devices here. Explore all available models on Qualcomm® AI Hub.
## License

- The license for the original implementation of Stable-Diffusion-v2.1 can be found here.
- The license for the compiled assets for on-device deployment can be found here.
## Community

- Join our AI Hub Slack community to collaborate, post questions, and learn more about on-device AI.
- For questions or feedback, please reach out to us.
## Usage and Limitations

The model may not be used for or in connection with any of the following applications:
- Accessing essential private and public services and benefits;
- Administration of justice and democratic processes;
- Assessing or recognizing the emotional state of a person;
- Biometric and biometrics-based systems, including categorization of persons based on sensitive characteristics;
- Education and vocational training;
- Employment and workers management;
- Exploitation of the vulnerabilities of persons resulting in harmful behavior;
- General purpose social scoring;
- Law enforcement;
- Management and operation of critical infrastructure;
- Migration, asylum and border control management;
- Predictive policing;
- Real-time remote biometric identification in public spaces;
- Recommender systems of social media platforms;
- Scraping of facial images (from the internet or otherwise); and/or
- Subliminal manipulation.