Upload README.md with huggingface_hub
More details on model performance across various devices can be found here.

| Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|
| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 45.669 ms | 5 - 22 MB | FP16 | NPU | [FastSam-X.so](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.so) |
This script compiles the model, profiles it on a cloud-hosted device, and prints a performance summary:

```bash
python -m qai_hub_models.models.fastsam_x.export
```

A run produces a profile job summary such as:

```
Profile Job summary of FastSam-X
--------------------------------------------------
Device: Snapdragon X Elite CRD (11)
Estimated Inference Time: 44.54 ms
Estimated Peak Memory Range: 4.70-4.70 MB
Compute Units: NPU (418) | Total (418)
```
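If you prefer to stay in Python, the same entry point can be launched with the standard library's `runpy`; this is a sketch of an equivalent invocation, not something the README itself shows:

```python
# Run the export module the same way `python -m` would, from inside Python.
# Note: the module parses sys.argv, so run this from a clean interpreter.
import runpy

runpy.run_module("qai_hub_models.models.fastsam_x.export", run_name="__main__")
```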
## How does this work?
Step 1: **Compile model for on-device deployment**

To compile a PyTorch model for on-device deployment, we first trace the model in memory using `jit.trace` and then call the `submit_compile_job` API.

```python
import torch

import qai_hub as hub
from qai_hub_models.models.fastsam_x import Model

# Load the model
torch_model = Model.from_pretrained()

# Device
device = hub.Device("Samsung Galaxy S23")

# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()

pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])

# Compile model on a specific device
compile_job = hub.submit_compile_job(
    model=pt_model,
    device=device,
    input_specs=torch_model.get_input_spec(),
)

# Get target model to run on-device
target_model = compile_job.get_target_model()
```
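If you want the compiled artifact on disk as well, it can be downloaded; a minimal sketch, assuming `qai_hub`'s `Model.download` method and a placeholder filename (both are my assumptions, not part of this README):

```python
# Save the compiled target model locally; the filename is a placeholder.
target_model.download("fastsam_x_compiled_model.bin")
```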
Step 2: **Performance profiling on cloud-hosted device**

After compilation, the model can be profiled on a device automatically provisioned in the cloud. Once the job is submitted, you can navigate to a provided job URL to view a variety of on-device performance metrics.

```python
profile_job = hub.submit_profile_job(
    model=target_model,
    device=device,
)
```
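The metrics shown at the job URL can also be pulled programmatically; a minimal sketch, assuming `qai_hub`'s `ProfileJob.download_profile` method:

```python
# Fetch the raw profiling data once the job completes.
profile_data = profile_job.download_profile()
print(profile_data)
```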
Step 3: **Verify on-device accuracy**

To verify the accuracy of the model on-device, you can run on-device inference on sample input data on the same cloud-hosted device.

```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
    model=target_model,
    device=device,
    inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```

With the output of the model, you can compute metrics like PSNR or relative errors, or spot-check the output against the expected output.
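As one concrete way to run that comparison, here is a minimal PSNR sketch. It reuses `torch_model`, `input_data`, and `on_device_output` from the snippets above; passing the sample inputs positionally in dict order is an assumption for illustration:

```python
import numpy as np
import torch

def psnr(expected: np.ndarray, actual: np.ndarray, peak: float = 1.0) -> float:
    # Peak signal-to-noise ratio in dB; higher means closer agreement.
    mse = np.mean((expected.astype(np.float64) - actual.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak * peak / mse)

# Reference output from the PyTorch model on the same sample inputs
# (assumes inputs can be passed positionally, in dict order).
torch_inputs = [torch.tensor(v[0]) for v in input_data.values()]
with torch.no_grad():
    reference = torch_model(*torch_inputs)
reference = reference if isinstance(reference, (list, tuple)) else [reference]

# on_device_output maps output names to lists of numpy arrays; compare the
# first on-device tensor against the first PyTorch reference tensor.
actual = np.asarray(next(iter(on_device_output.values()))[0])
expected = reference[0].detach().numpy()
print(f"PSNR = {psnr(expected, actual):.2f} dB")
```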