qaihm-bot committed
Commit: b462a9e
1 Parent(s): 1739d8b

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +8 -80
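The commit message says the file was uploaded with `huggingface_hub`. A minimal sketch of what such an upload typically looks like, assuming write access to the repo and prior authentication (the commit message string is taken from this commit; the rest is illustrative):

```python
from huggingface_hub import HfApi

# Upload a local README.md to the model repo on the Hugging Face Hub.
# Assumes you are already authenticated, e.g. via `huggingface-cli login`.
api = HfApi()
api.upload_file(
    path_or_fileobj="README.md",
    path_in_repo="README.md",
    repo_id="qualcomm/GoogLeNetQuantized",
    repo_type="model",
    commit_message="Upload README.md with huggingface_hub",
)
```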
README.md CHANGED
@@ -34,10 +34,13 @@ More details on model performance across various devices can be found
  - Model size: 6.55 MB
 
 
+
+
  | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
  | ---|---|---|---|---|---|---|---|
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.296 ms | 0 - 1 MB | INT8 | NPU | [GoogLeNetQuantized.tflite](https://huggingface.co/qualcomm/GoogLeNetQuantized/blob/main/GoogLeNetQuantized.tflite) |
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.337 ms | 0 - 4 MB | INT8 | NPU | [GoogLeNetQuantized.so](https://huggingface.co/qualcomm/GoogLeNetQuantized/blob/main/GoogLeNetQuantized.so) |
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.298 ms | 0 - 1 MB | INT8 | NPU | [GoogLeNetQuantized.tflite](https://huggingface.co/qualcomm/GoogLeNetQuantized/blob/main/GoogLeNetQuantized.tflite) |
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.342 ms | 0 - 10 MB | INT8 | NPU | [GoogLeNetQuantized.so](https://huggingface.co/qualcomm/GoogLeNetQuantized/blob/main/GoogLeNetQuantized.so) |
+
 
 
  ## Installation
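The target model files linked in the table above live in this repo, so they can also be fetched programmatically. A minimal sketch using `huggingface_hub` (repo id and filename are taken from the table links; this snippet is not part of the original README):

```python
from huggingface_hub import hf_hub_download

# Download the precompiled TFLite asset referenced in the table above;
# returns the local path of the cached file.
tflite_path = hf_hub_download(
    repo_id="qualcomm/GoogLeNetQuantized",
    filename="GoogLeNetQuantized.tflite",
)
print(tflite_path)
```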
@@ -99,89 +102,14 @@ python -m qai_hub_models.models.googlenet_quantized.export
  Profile Job summary of GoogLeNetQuantized
  --------------------------------------------------
  Device: Snapdragon X Elite CRD (11)
- Estimated Inference Time: 0.46 ms
- Estimated Peak Memory Range: 0.49-0.49 MB
+ Estimated Inference Time: 0.44 ms
+ Estimated Peak Memory Range: 0.51-0.51 MB
  Compute Units: NPU (86) | Total (86)
 
 
  ```
- ## How does this work?
-
- This [export script](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/GoogLeNetQuantized/export.py)
- leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
- on-device. Let's go through each step below in detail:
-
- Step 1: **Compile model for on-device deployment**
-
- To compile a PyTorch model for on-device deployment, we first trace the model
- in memory using `jit.trace` and then call the `submit_compile_job` API.
-
- ```python
- import torch
-
- import qai_hub as hub
- from qai_hub_models.models.googlenet_quantized import Model
-
- # Load the model
- torch_model = Model.from_pretrained()
- torch_model.eval()
-
- # Device
- device = hub.Device("Samsung Galaxy S23")
-
- # Trace model
- input_shape = torch_model.get_input_spec()
- sample_inputs = torch_model.sample_inputs()
-
- pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
-
- # Compile model on a specific device
- compile_job = hub.submit_compile_job(
-     model=pt_model,
-     device=device,
-     input_specs=input_shape,
- )
-
- # Get target model to run on-device
- target_model = compile_job.get_target_model()
-
- ```
-
-
- Step 2: **Performance profiling on cloud-hosted device**
-
- After compiling the model in step 1, it can be profiled on-device using the
- `target_model`. Note that this script runs the model on a device automatically
- provisioned in the cloud. Once the job is submitted, you can navigate to a
- provided job URL to view a variety of on-device performance metrics.
- ```python
- profile_job = hub.submit_profile_job(
-     model=target_model,
-     device=device,
- )
-
- ```
-
- Step 3: **Verify on-device accuracy**
-
- To verify the accuracy of the model on-device, you can run on-device inference
- on sample input data on the same cloud-hosted device.
- ```python
- input_data = torch_model.sample_inputs()
- inference_job = hub.submit_inference_job(
-     model=target_model,
-     device=device,
-     inputs=input_data,
- )
-
- on_device_output = inference_job.download_output_data()
-
- ```
- With the output of the model, you can compute metrics like PSNR and relative
- error, or spot-check the output against the expected output.
 
- **Note**: This on-device profiling and inference requires access to Qualcomm®
- AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
 
 
  ## Run demo on a cloud-hosted device
@@ -220,7 +148,7 @@ Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
  ## License
  - The license for the original implementation of GoogLeNetQuantized can be found
  [here](https://github.com/pytorch/vision/blob/main/LICENSE).
- - The license for the compiled assets for on-device deployment can be found [here]({deploy_license_url})
+ - The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
 
  ## References
  * [Going Deeper with Convolutions](https://arxiv.org/abs/1409.4842)
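The walkthrough removed by this diff suggests checking on-device accuracy by computing PSNR or relative error against the PyTorch model's output. A minimal sketch of that comparison, reusing `torch_model` and `on_device_output` from the snippets above; the `psnr` helper and the assumed layout of `on_device_output` (a dict mapping output names to lists of arrays) are illustrative, not part of the `qai_hub` API:

```python
import numpy as np
import torch

def psnr(reference: np.ndarray, observed: np.ndarray, peak: float) -> float:
    """Peak signal-to-noise ratio in dB between two same-shaped arrays."""
    mse = np.mean((reference.astype(np.float64) - observed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Reference output from the PyTorch model on the same sample inputs.
sample_inputs = torch_model.sample_inputs()
with torch.no_grad():
    torch_output = torch_model(*[torch.tensor(data[0]) for _, data in sample_inputs.items()])

# Compare against the first array of the first output returned on-device
# (assumed dict-of-lists layout; adjust to the actual structure).
device_array = list(on_device_output.values())[0][0]
reference = torch_output.numpy()
print(f"PSNR: {psnr(reference, device_array, peak=np.abs(reference).max()):.2f} dB")
```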
 
34
  - Model size: 6.55 MB
35
 
36
 
37
+
38
+
39
  | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
40
  | ---|---|---|---|---|---|---|---|
41
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.298 ms | 0 - 1 MB | INT8 | NPU | [GoogLeNetQuantized.tflite](https://huggingface.co/qualcomm/GoogLeNetQuantized/blob/main/GoogLeNetQuantized.tflite)
42
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.342 ms | 0 - 10 MB | INT8 | NPU | [GoogLeNetQuantized.so](https://huggingface.co/qualcomm/GoogLeNetQuantized/blob/main/GoogLeNetQuantized.so)
43
+
44
 
45
 
46
  ## Installation
 
102
  Profile Job summary of GoogLeNetQuantized
103
  --------------------------------------------------
104
  Device: Snapdragon X Elite CRD (11)
105
+ Estimated Inference Time: 0.44 ms
106
+ Estimated Peak Memory Range: 0.51-0.51 MB
107
  Compute Units: NPU (86) | Total (86)
108
 
109
 
110
  ```
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
111
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
112
 
 
 
113
 
114
 
115
  ## Run demo on a cloud-hosted device
 
148
  ## License
149
  - The license for the original implementation of GoogLeNetQuantized can be found
150
  [here](https://github.com/pytorch/vision/blob/main/LICENSE).
151
+ - The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
152
 
153
  ## References
154
  * [Going Deeper with Convolutions](https://arxiv.org/abs/1409.4842)