monai / medical
katielink committed
Commit 9a69ce4
1 Parent(s): e02a92e

update the TensorRT part in the README file

Files changed (3):
  1. README.md +11 -3
  2. configs/metadata.json +2 -1
  3. docs/README.md +11 -3
README.md CHANGED
```diff
@@ -65,15 +65,23 @@ This model achieve the 0.91 accuracy on validation patches, and FROC of 0.72 on
 
 ![A Graph showing Train Acc, Train Loss, and Validation Acc](https://developer.download.nvidia.com/assets/Clara/Images/monai_pathology_tumor_detection_train_and_val_metrics_v3.png)
 
-The `pathology_tumor_detection` bundle supports the TensorRT acceleration. The table below shows the speedup ratios benchmarked on an A100 80G GPU, in which the `model computation` means the speedup ratio of model's inference with a random input without preprocessing and postprocessing and the `end2end` means run the bundle end to end with the TensorRT based model. The `torch_fp32` and `torch_amp` is for the pytorch model with or without `amp` mode. The `trt_fp32` and `trt_fp16` is for the TensorRT based model converted in corresponding precision. The `speedup amp`, `speedup fp32` and `speedup fp16` is the speedup ratio of corresponding models versus the pytorch float32 model, while the `amp vs fp16` is between the pytorch amp model and the TensorRT float16 based model.
+The `pathology_tumor_detection` bundle supports the TensorRT acceleration. The table below shows the speedup ratios benchmarked on an A100 80G GPU.
 
-Please notice that the benchmark results are tested on one WSI image since the images are too large to benchmark. And the inference time in the end2end line stands for one patch of the whole image.
+Please notice that the benchmark results are tested on one WSI image since the images are too large to benchmark. And the inference time in the end-to-end line stands for one patch of the whole image.
 
 | method | torch_fp32(ms) | torch_amp(ms) | trt_fp32(ms) | trt_fp16(ms) | speedup amp | speedup fp32 | speedup fp16 | amp vs fp16|
 | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
 | model computation |1.93 | 2.52 | 1.61 | 1.33 | 0.77 | 1.20 | 1.45 | 1.89 |
 | end2end |224.97 | 223.50 | 222.65 | 224.03 | 1.01 | 1.01 | 1.00 | 1.00 |
 
+Where:
+- `model computation` means the speedup ratio of model's inference with a random input without preprocessing and postprocessing
+- `end2end` means run the bundle end-to-end with the TensorRT based model.
+- `torch_fp32` and `torch_amp` are for the PyTorch models with or without `amp` mode.
+- `trt_fp32` and `trt_fp16` are for the TensorRT based models converted in corresponding precision.
+- `speedup amp`, `speedup fp32` and `speedup fp16` are the speedup ratios of corresponding models versus the PyTorch float32 model
+- `amp vs fp16` is the speedup ratio between the PyTorch amp model and the TensorRT float16 based model.
+
 This result is benchmarked under:
 
 - TensorRT: 8.5.3+cuda11.8
@@ -81,7 +89,7 @@ This result is benchmarked under:
 - CPU Architecture: x86-64
 - OS: ubuntu 20.04
 - Python version:3.8.10
-- CUDA version: 11.8
+- CUDA version: 12.0
 - GPU models and configuration: A100 80G
 
 ## MONAI Bundle Commands
```
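The speedup columns in the benchmark table above are plain latency ratios against the PyTorch baselines; a minimal sketch (values copied from the table's `model computation` row) that reproduces them:

```python
# Per-inference latencies (ms) from the "model computation" row of the table.
torch_fp32, torch_amp = 1.93, 2.52
trt_fp32, trt_fp16 = 1.61, 1.33

# Each "speedup" column divides the PyTorch float32 latency by the candidate's
# latency; "amp vs fp16" instead compares PyTorch amp against TensorRT fp16.
speedup_amp = round(torch_fp32 / torch_amp, 2)   # 0.77
speedup_fp32 = round(torch_fp32 / trt_fp32, 2)   # 1.20
speedup_fp16 = round(torch_fp32 / trt_fp16, 2)   # 1.45
amp_vs_fp16 = round(torch_amp / trt_fp16, 2)     # 1.89
print(speedup_amp, speedup_fp32, speedup_fp16, amp_vs_fp16)
```

Note that ratios near 1.0 in the `end2end` row are expected: whole-slide preprocessing and postprocessing dominate the runtime, so model-only speedups barely move the end-to-end number.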
configs/metadata.json CHANGED
```diff
@@ -1,7 +1,8 @@
 {
     "schema": "https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/meta_schema_20220324.json",
-    "version": "0.5.0",
+    "version": "0.5.1",
     "changelog": {
+        "0.5.1": "update the TensorRT part in the README file",
         "0.5.0": "add the command of executing inference with TensorRT models",
         "0.4.9": "adapt to BundleWorkflow interface",
         "0.4.8": "update the readme file with TensorRT convert",
```
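The metadata change follows the bundle's release pattern: bump `version` and prepend a matching `changelog` entry so the newest release is listed first. A sketch with a hypothetical helper (`bump_bundle_version` is not part of MONAI, just an illustration of the edit above):

```python
import json

def bump_bundle_version(meta: dict, new_version: str, note: str) -> dict:
    # Hypothetical helper: set the new version and prepend its changelog
    # entry, keeping newest-first ordering as in the metadata.json diff.
    meta = dict(meta)
    meta["version"] = new_version
    meta["changelog"] = {new_version: note, **meta.get("changelog", {})}
    return meta

meta = {
    "version": "0.5.0",
    "changelog": {
        "0.5.0": "add the command of executing inference with TensorRT models",
    },
}
updated = bump_bundle_version(
    meta, "0.5.1", "update the TensorRT part in the README file"
)
print(json.dumps(updated, indent=4))
```

Prepending via dict merge relies on Python dicts preserving insertion order, which keeps the serialized `changelog` newest-first.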
docs/README.md CHANGED
```diff
@@ -58,15 +58,23 @@ This model achieve the 0.91 accuracy on validation patches, and FROC of 0.72 on
 
 ![A Graph showing Train Acc, Train Loss, and Validation Acc](https://developer.download.nvidia.com/assets/Clara/Images/monai_pathology_tumor_detection_train_and_val_metrics_v3.png)
 
-The `pathology_tumor_detection` bundle supports the TensorRT acceleration. The table below shows the speedup ratios benchmarked on an A100 80G GPU, in which the `model computation` means the speedup ratio of model's inference with a random input without preprocessing and postprocessing and the `end2end` means run the bundle end to end with the TensorRT based model. The `torch_fp32` and `torch_amp` is for the pytorch model with or without `amp` mode. The `trt_fp32` and `trt_fp16` is for the TensorRT based model converted in corresponding precision. The `speedup amp`, `speedup fp32` and `speedup fp16` is the speedup ratio of corresponding models versus the pytorch float32 model, while the `amp vs fp16` is between the pytorch amp model and the TensorRT float16 based model.
+The `pathology_tumor_detection` bundle supports the TensorRT acceleration. The table below shows the speedup ratios benchmarked on an A100 80G GPU.
 
-Please notice that the benchmark results are tested on one WSI image since the images are too large to benchmark. And the inference time in the end2end line stands for one patch of the whole image.
+Please notice that the benchmark results are tested on one WSI image since the images are too large to benchmark. And the inference time in the end-to-end line stands for one patch of the whole image.
 
 | method | torch_fp32(ms) | torch_amp(ms) | trt_fp32(ms) | trt_fp16(ms) | speedup amp | speedup fp32 | speedup fp16 | amp vs fp16|
 | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
 | model computation |1.93 | 2.52 | 1.61 | 1.33 | 0.77 | 1.20 | 1.45 | 1.89 |
 | end2end |224.97 | 223.50 | 222.65 | 224.03 | 1.01 | 1.01 | 1.00 | 1.00 |
 
+Where:
+- `model computation` means the speedup ratio of model's inference with a random input without preprocessing and postprocessing
+- `end2end` means run the bundle end-to-end with the TensorRT based model.
+- `torch_fp32` and `torch_amp` are for the PyTorch models with or without `amp` mode.
+- `trt_fp32` and `trt_fp16` are for the TensorRT based models converted in corresponding precision.
+- `speedup amp`, `speedup fp32` and `speedup fp16` are the speedup ratios of corresponding models versus the PyTorch float32 model
+- `amp vs fp16` is the speedup ratio between the PyTorch amp model and the TensorRT float16 based model.
+
 This result is benchmarked under:
 
 - TensorRT: 8.5.3+cuda11.8
@@ -74,7 +82,7 @@ This result is benchmarked under:
 - CPU Architecture: x86-64
 - OS: ubuntu 20.04
 - Python version:3.8.10
-- CUDA version: 11.8
+- CUDA version: 12.0
 - GPU models and configuration: A100 80G
 
 ## MONAI Bundle Commands
```