qaihm-bot committed
Commit 30eb1cc
1 Parent(s): 4a85ffb

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +57 -6
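
The commit message indicates this README was pushed with the `huggingface_hub` client. As a minimal sketch of how such an upload is typically done (the `repo_id` below is a hypothetical placeholder, not taken from this commit):

```python
# Minimal sketch: pushing a README with the huggingface_hub client library.
# The repo_id is a hypothetical placeholder for illustration.
from huggingface_hub import HfApi

api = HfApi()
api.upload_file(
    path_or_fileobj="README.md",   # local file to upload
    path_in_repo="README.md",      # destination path inside the repo
    repo_id="qualcomm/Stable-Diffusion-v2.1",  # hypothetical repo id
    commit_message="Upload README.md with huggingface_hub",
)
```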
README.md CHANGED
@@ -144,9 +144,11 @@ This [export script](https://aihub.qualcomm.com/models/stable_diffusion_v2_1_quantized)
  leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
  on-device. Let's go through each step below in detail:
 
- Step 1: **Upload compiled model**
+ Step 1: **Compile model for on-device deployment**
+
+ To compile a PyTorch model for on-device deployment, we first trace the model
+ in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
 
- Upload compiled models from `qai_hub_models.models.stable_diffusion_v2_1_quantized` on hub.
  ```python
  import torch
 
@@ -154,11 +156,60 @@ import qai_hub as hub
  from qai_hub_models.models.stable_diffusion_v2_1_quantized import Model
 
  # Load the model
- model = Model.from_precompiled()
+ model = Model.from_pretrained()
+ text_encoder_model = model.text_encoder
+ unet_model = model.unet
+ vae_decoder_model = model.vae_decoder
+
+ # Device
+ device = hub.Device("Samsung Galaxy S23")
+
+ # Trace the text encoder
+ text_encoder_input_shape = text_encoder_model.get_input_spec()
+ text_encoder_sample_inputs = text_encoder_model.sample_inputs()
+
+ traced_text_encoder_model = torch.jit.trace(text_encoder_model, [torch.tensor(data[0]) for _, data in text_encoder_sample_inputs.items()])
+
+ # Compile the text encoder for a specific device
+ text_encoder_compile_job = hub.submit_compile_job(
+     model=traced_text_encoder_model,
+     device=device,
+     input_specs=text_encoder_model.get_input_spec(),
+ )
+
+ # Get target model to run on-device
+ text_encoder_target_model = text_encoder_compile_job.get_target_model()
+
+ # Trace the UNet
+ unet_input_shape = unet_model.get_input_spec()
+ unet_sample_inputs = unet_model.sample_inputs()
+
+ traced_unet_model = torch.jit.trace(unet_model, [torch.tensor(data[0]) for _, data in unet_sample_inputs.items()])
+
+ # Compile the UNet for a specific device
+ unet_compile_job = hub.submit_compile_job(
+     model=traced_unet_model,
+     device=device,
+     input_specs=unet_model.get_input_spec(),
+ )
+
+ # Get target model to run on-device
+ unet_target_model = unet_compile_job.get_target_model()
+
+ # Trace the VAE decoder
+ vae_decoder_input_shape = vae_decoder_model.get_input_spec()
+ vae_decoder_sample_inputs = vae_decoder_model.sample_inputs()
+
+ traced_vae_decoder_model = torch.jit.trace(vae_decoder_model, [torch.tensor(data[0]) for _, data in vae_decoder_sample_inputs.items()])
+
+ # Compile the VAE decoder for a specific device
+ vae_decoder_compile_job = hub.submit_compile_job(
+     model=traced_vae_decoder_model,
+     device=device,
+     input_specs=vae_decoder_model.get_input_spec(),
+ )
+
+ # Get target model to run on-device
+ vae_decoder_target_model = vae_decoder_compile_job.get_target_model()
 
- model_textencoder_quantized = hub.upload_model(model.text_encoder.get_target_model_path())
- model_unet_quantized = hub.upload_model(model.unet.get_target_model_path())
- model_vaedecoder_quantized = hub.upload_model(model.vae_decoder.get_target_model_path())
  ```
 
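
Step 1 of the new README ends with a compiled target model for each component. For context, here is a hedged sketch of the step that typically follows in these AI Hub walkthroughs: profiling a compiled model on the device. It is not part of this diff and reuses the `device` and `text_encoder_target_model` names from the code above.

```python
# Hedged sketch (not part of this diff): profile the compiled text encoder
# on-device with qai_hub. submit_profile_job measures on-device performance
# of a compiled target model.
import qai_hub as hub

device = hub.Device("Samsung Galaxy S23")

# text_encoder_target_model is the output of the compile step above
text_encoder_profile_job = hub.submit_profile_job(
    model=text_encoder_target_model,
    device=device,
)
```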