qaihm-bot committed · verified
Commit d3d01c5 · 1 Parent(s): 1bcc837

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +40 -19
README.md CHANGED
@@ -14,7 +14,7 @@ tags:
 
 QuickSRNet Medium is designed for upscaling images on mobile platforms to sharpen in real-time.
 
-This model is an implementation of QuickSRNetMedium found [here](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/quicksrnet).
+This model is an implementation of QuickSRNetMedium found [here]({source_repo}).
 This repository provides scripts to run QuickSRNetMedium on Qualcomm® devices.
 More details on model performance across various devices, can be found
 [here](https://aihub.qualcomm.com/models/quicksrnetmedium).
@@ -29,15 +29,32 @@ More details on model performance across various devices, can be found
 - Number of parameters: 55.0K
 - Model size: 220 KB
 
-| Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
-| ---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 1.334 ms | 0 - 1 MB | FP16 | NPU | [QuickSRNetMedium.tflite](https://huggingface.co/qualcomm/QuickSRNetMedium/blob/main/QuickSRNetMedium.tflite)
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.994 ms | 0 - 7 MB | FP16 | NPU | [QuickSRNetMedium.so](https://huggingface.co/qualcomm/QuickSRNetMedium/blob/main/QuickSRNetMedium.so)
-
-
+| Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
+|---|---|---|---|---|---|---|---|---|
+| QuickSRNetMedium | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 1.359 ms | 0 - 2 MB | FP16 | NPU | [QuickSRNetMedium.tflite](https://huggingface.co/qualcomm/QuickSRNetMedium/blob/main/QuickSRNetMedium.tflite) |
+| QuickSRNetMedium | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 1.017 ms | 0 - 3 MB | FP16 | NPU | [QuickSRNetMedium.so](https://huggingface.co/qualcomm/QuickSRNetMedium/blob/main/QuickSRNetMedium.so) |
+| QuickSRNetMedium | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | ONNX | 1.512 ms | 0 - 6 MB | FP16 | NPU | [QuickSRNetMedium.onnx](https://huggingface.co/qualcomm/QuickSRNetMedium/blob/main/QuickSRNetMedium.onnx) |
+| QuickSRNetMedium | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 0.981 ms | 0 - 22 MB | FP16 | NPU | [QuickSRNetMedium.tflite](https://huggingface.co/qualcomm/QuickSRNetMedium/blob/main/QuickSRNetMedium.tflite) |
+| QuickSRNetMedium | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 0.674 ms | 0 - 11 MB | FP16 | NPU | [QuickSRNetMedium.so](https://huggingface.co/qualcomm/QuickSRNetMedium/blob/main/QuickSRNetMedium.so) |
+| QuickSRNetMedium | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | ONNX | 1.084 ms | 0 - 24 MB | FP16 | NPU | [QuickSRNetMedium.onnx](https://huggingface.co/qualcomm/QuickSRNetMedium/blob/main/QuickSRNetMedium.onnx) |
+| QuickSRNetMedium | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 1.333 ms | 0 - 1 MB | FP16 | NPU | [QuickSRNetMedium.tflite](https://huggingface.co/qualcomm/QuickSRNetMedium/blob/main/QuickSRNetMedium.tflite) |
+| QuickSRNetMedium | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 0.91 ms | 0 - 1 MB | FP16 | NPU | Use Export Script |
+| QuickSRNetMedium | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 1.366 ms | 0 - 1 MB | FP16 | NPU | [QuickSRNetMedium.tflite](https://huggingface.co/qualcomm/QuickSRNetMedium/blob/main/QuickSRNetMedium.tflite) |
+| QuickSRNetMedium | SA8255 (Proxy) | SA8255P Proxy | QNN | 0.932 ms | 0 - 2 MB | FP16 | NPU | Use Export Script |
+| QuickSRNetMedium | SA8775 (Proxy) | SA8775P Proxy | TFLITE | 1.411 ms | 0 - 1 MB | FP16 | NPU | [QuickSRNetMedium.tflite](https://huggingface.co/qualcomm/QuickSRNetMedium/blob/main/QuickSRNetMedium.tflite) |
+| QuickSRNetMedium | SA8775 (Proxy) | SA8775P Proxy | QNN | 1.003 ms | 0 - 2 MB | FP16 | NPU | Use Export Script |
+| QuickSRNetMedium | SA8650 (Proxy) | SA8650P Proxy | TFLITE | 1.324 ms | 0 - 1 MB | FP16 | NPU | [QuickSRNetMedium.tflite](https://huggingface.co/qualcomm/QuickSRNetMedium/blob/main/QuickSRNetMedium.tflite) |
+| QuickSRNetMedium | SA8650 (Proxy) | SA8650P Proxy | QNN | 0.925 ms | 0 - 1 MB | FP16 | NPU | Use Export Script |
+| QuickSRNetMedium | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 2.746 ms | 6 - 28 MB | FP16 | NPU | [QuickSRNetMedium.tflite](https://huggingface.co/qualcomm/QuickSRNetMedium/blob/main/QuickSRNetMedium.tflite) |
+| QuickSRNetMedium | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 1.234 ms | 0 - 14 MB | FP16 | NPU | Use Export Script |
+| QuickSRNetMedium | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 0.971 ms | 0 - 15 MB | FP16 | NPU | [QuickSRNetMedium.tflite](https://huggingface.co/qualcomm/QuickSRNetMedium/blob/main/QuickSRNetMedium.tflite) |
+| QuickSRNetMedium | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 0.684 ms | 0 - 8 MB | FP16 | NPU | Use Export Script |
+| QuickSRNetMedium | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | ONNX | 0.925 ms | 0 - 15 MB | FP16 | NPU | [QuickSRNetMedium.onnx](https://huggingface.co/qualcomm/QuickSRNetMedium/blob/main/QuickSRNetMedium.onnx) |
+| QuickSRNetMedium | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 1.035 ms | 0 - 0 MB | FP16 | NPU | Use Export Script |
+| QuickSRNetMedium | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 1.552 ms | 9 - 9 MB | FP16 | NPU | [QuickSRNetMedium.onnx](https://huggingface.co/qualcomm/QuickSRNetMedium/blob/main/QuickSRNetMedium.onnx) |
 
 ## Installation
 
@@ -92,16 +109,16 @@ device. This script does the following:
 ```bash
 python -m qai_hub_models.models.quicksrnetmedium.export
 ```
-
 ```
-Profile Job summary of QuickSRNetMedium
---------------------------------------------------
-Device: Snapdragon X Elite CRD (11)
-Estimated Inference Time: 1.04 ms
-Estimated Peak Memory Range: 0.20-0.20 MB
-Compute Units: NPU (17) | Total (17)
-
-
+Profiling Results
+------------------------------------------------------------
+QuickSRNetMedium
+Device : Samsung Galaxy S23 (13)
+Runtime : TFLITE
+Estimated inference time (ms) : 1.4
+Estimated peak memory usage (MB): [0, 2]
+Total # Ops : 17
+Compute Unit(s) : NPU (14 ops) CPU (3 ops)
 ```
 
@@ -200,15 +217,19 @@ provides instructions on how to use the `.so` shared library in an Android appl
 Get more details on QuickSRNetMedium's performance across various devices [here](https://aihub.qualcomm.com/models/quicksrnetmedium).
 Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
 
+
 ## License
-- The license for the original implementation of QuickSRNetMedium can be found
-[here](https://github.com/quic/aimet-model-zoo/blob/develop/LICENSE.pdf).
-- The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
+* The license for the original implementation of QuickSRNetMedium can be found [here](https://github.com/quic/aimet-model-zoo/blob/develop/LICENSE.pdf).
+* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
+
 
 ## References
 * [QuickSRNet: Plain Single-Image Super-Resolution Architecture for Faster Inference on Mobile Platforms](https://arxiv.org/abs/2303.04336)
 * [Source Model Implementation](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/quicksrnet)
 
+
 ## Community
 * Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
  * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
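
The updated table above publishes ready-to-run target assets for each runtime, including `QuickSRNetMedium.tflite`. As a quick local smoke test of that TFLite asset, a minimal sketch using the standard TensorFlow Lite `Interpreter` might look like the following; it assumes the file has been downloaded to the working directory and feeds a random tensor shaped from the model's own input details, since this README diff does not spell out the exact image pre/post-processing (the repository's demo script covers that).

```python
# Smoke test for the published TFLite asset (assumption: QuickSRNetMedium.tflite
# has been downloaded locally). A random tensor stands in for a real low-resolution
# image, so this only verifies that the model loads and reports sane I/O shapes.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="QuickSRNetMedium.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Shape and dtype are read from the model itself rather than hard-coded.
dummy_lr = np.random.rand(*inp["shape"]).astype(inp["dtype"])
interpreter.set_tensor(inp["index"], dummy_lr)
interpreter.invoke()
upscaled = interpreter.get_tensor(out["index"])

print("input shape :", tuple(inp["shape"]))
print("output shape:", upscaled.shape)  # expect the super-resolved resolution
```

The same kind of check can be run against the ONNX asset through `onnxruntime`, while the QNN `.so` target and the per-device numbers in the table come from the hosted profiling jobs submitted by the export script shown in the diff.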