Upload README.md with huggingface_hub
README.md CHANGED
@@ -36,8 +36,10 @@ More details on model performance across various devices, can be found
 
 | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
 | ---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite |
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite |
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 68.887 ms | 11 - 54 MB | FP16 | GPU | [WhisperEncoder.tflite](https://huggingface.co/qualcomm/Whisper-Tiny-En/blob/main/WhisperEncoder.tflite)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 3.871 ms | 3 - 5 MB | FP16 | NPU | [WhisperDecoder.tflite](https://huggingface.co/qualcomm/Whisper-Tiny-En/blob/main/WhisperDecoder.tflite)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 288.969 ms | 0 - 52 MB | FP16 | NPU | [WhisperEncoder.so](https://huggingface.co/qualcomm/Whisper-Tiny-En/blob/main/WhisperEncoder.so)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 3.646 ms | 9 - 45 MB | FP16 | NPU | [WhisperDecoder.so](https://huggingface.co/qualcomm/Whisper-Tiny-En/blob/main/WhisperDecoder.so)
 
 
 ## Installation

@@ -98,17 +100,17 @@ python -m qai_hub_models.models.whisper_tiny_en.export
 ```
 Profile Job summary of WhisperEncoder
 --------------------------------------------------
-Device:
-Estimated Inference Time:
-Estimated Peak Memory Range:
-Compute Units:
+Device: Snapdragon X Elite CRD (11)
+Estimated Inference Time: 240.12 ms
+Estimated Peak Memory Range: 0.92-0.92 MB
+Compute Units: NPU (337) | Total (337)
 
 Profile Job summary of WhisperDecoder
 --------------------------------------------------
-Device:
-Estimated Inference Time:
-Estimated Peak Memory Range:
-Compute Units: NPU (
+Device: Snapdragon X Elite CRD (11)
+Estimated Inference Time: 3.82 ms
+Estimated Peak Memory Range: 20.25-20.25 MB
+Compute Units: NPU (447) | Total (447)
 
 
 ```
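The profile summaries updated in the second hunk are printed by the export script named in its hunk header. A minimal sketch of reproducing them, assuming only the base `qai-hub-models` package and the export entry point shown above (any model-specific extras are covered in the README's Installation section):

```bash
# Install the Qualcomm AI Hub Models package (model-specific extras may also be
# required; see the README's Installation section for the exact command).
pip install qai-hub-models

# Export Whisper-Tiny-En and profile it on a hosted device; this prints the
# "Profile Job summary" blocks for WhisperEncoder and WhisperDecoder shown above.
python -m qai_hub_models.models.whisper_tiny_en.export
```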