Upload README.md with huggingface_hub
README.md CHANGED
@@ -36,8 +36,8 @@ More details on model performance across various devices, can be found

| Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
| ---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.296 ms | 0 - 1 MB | INT8 | NPU | [GoogLeNetQuantized.tflite](https://huggingface.co/qualcomm/GoogLeNetQuantized/blob/main/GoogLeNetQuantized.tflite)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.337 ms | 0 - 4 MB | INT8 | NPU | [GoogLeNetQuantized.so](https://huggingface.co/qualcomm/GoogLeNetQuantized/blob/main/GoogLeNetQuantized.so)


## Installation

@@ -45,10 +45,11 @@ More details on model performance across various devices, can be found
This model can be installed as a Python package via pip.

```bash
-pip install qai-hub-models
+pip install "qai-hub-models[googlenet_quantized]"
```


+
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device

Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
@@ -98,8 +99,8 @@ python -m qai_hub_models.models.googlenet_quantized.export
Profile Job summary of GoogLeNetQuantized
--------------------------------------------------
Device: Snapdragon X Elite CRD (11)
-Estimated Inference Time: 0.
-Estimated Peak Memory Range: 0.
+Estimated Inference Time: 0.46 ms
+Estimated Peak Memory Range: 0.49-0.49 MB
Compute Units: NPU (86) | Total (86)

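For context, the sketch below strings together the commands this diff touches: the updated install line from the second hunk and the export entry point named in the third hunk header. The `qai-hub configure` step and the `API_TOKEN` placeholder are assumptions based on the usual Qualcomm® AI Hub sign-in workflow, not part of this diff.

```bash
# Install the package with the model-specific extra introduced by this change
pip install "qai-hub-models[googlenet_quantized]"

# Assumed step: authenticate against Qualcomm AI Hub with your API token
# (API_TOKEN is a placeholder; copy yours from the AI Hub account settings page)
qai-hub configure --api_token API_TOKEN

# Export and profile the model on a cloud-hosted device, as referenced in the
# third hunk of this diff
python -m qai_hub_models.models.googlenet_quantized.export
```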