complete the model package
- README.md +118 -0
- configs/inference.json +153 -0
- configs/logging.conf +21 -0
- configs/metadata.json +79 -0
- configs/train.json +373 -0
- docs/README.md +111 -0
- models/model.pt +3 -0
- scripts/center_crop.py +83 -0
README.md
ADDED
@@ -0,0 +1,118 @@
---
tags:
- monai
- medical
library_name: monai
license: unknown
---
# Prostate MRI zonal segmentation

### **Authors**

Lisa C. Adams, Keno K. Bressem

### **Tags**

Segmentation, MR, Prostate

## **Model Description**

This model was trained with the UNet architecture [1] and performs 3D volumetric segmentation of the anatomical prostate zones on T2w MRI images. The segmentation is formulated as a voxel-wise classification: each voxel is classified as central gland (1), peripheral zone (2), or background (0). The model is optimized with a gradient descent method that minimizes the focal soft Dice loss between the predicted mask and the ground-truth segmentation.

## **Data**

The model was trained on the prostate158 training data, which is available at https://doi.org/10.5281/zenodo.6481141. Only T2w images were used for this task.

### **Preprocessing**

MRI images in the prostate158 dataset were preprocessed with center cropping and resampling. When applying the model to new data, this preprocessing should be repeated.

#### **Center cropping**

T2w images were acquired with a voxel spacing of 0.47 x 0.47 x 3 mm and an axial FOV of 180 x 180 mm. However, the prostate rarely exceeds an axial diameter of 100 mm, and for zonal segmentation the tissue surrounding the prostate is not of interest; it only increases the image size and thus the computational cost. Center cropping therefore reduces the image size without sacrificing relevant information.

The script `center_crop.py` reproduces the center cropping performed in the prostate158 paper:

```bash
python scripts/center_crop.py --file_name path/to/t2_image --out_name cropped_t2
```
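The idea behind the crop can be illustrated with a minimal numpy sketch. This is illustrative only, not the bundle's `center_crop.py` (which operates on SimpleITK images and takes a relative `--margin` argument); the function name and the 100 mm target extent are assumptions drawn from the paragraph above.

```python
import numpy as np

def center_crop_axial(volume, spacing, crop_mm=(100.0, 100.0)):
    """Crop a 3D volume to a fixed physical extent in the axial plane.

    volume:  array of shape (rows, cols, slices)
    spacing: in-plane voxel spacing in mm, e.g. (0.47, 0.47)
    crop_mm: desired axial extent in mm (illustrative value, motivated by
             the prostate rarely exceeding ~100 mm axial diameter)
    """
    rows = int(round(crop_mm[0] / spacing[0]))
    cols = int(round(crop_mm[1] / spacing[1]))
    r0 = max((volume.shape[0] - rows) // 2, 0)
    c0 = max((volume.shape[1] - cols) // 2, 0)
    # keep all slices, crop symmetrically in-plane
    return volume[r0:r0 + rows, c0:c0 + cols, :]

# a 180 x 180 mm FOV at 0.47 mm spacing is roughly a 384 x 384 matrix
vol = np.zeros((384, 384, 24))
cropped = center_crop_axial(vol, (0.47, 0.47))
```

At 0.47 mm spacing, a 100 mm extent rounds to 213 voxels per side, so the in-plane matrix shrinks from 384 x 384 to 213 x 213 while the slice count is unchanged.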

#### **Resampling**

DWI and ADC sequences in prostate158 were resampled to the orientation and voxel spacing of the T2w sequence. As the zonal segmentation uses only T2w images, no additional resampling is necessary; the training script nevertheless performs resampling automatically.

## **Performance**

The model achieves the following performance on the prostate158 test dataset:

<table border=1 frame=void rules=rows>
    <thead>
        <tr>
            <td></td>
            <td colspan=3><b><center>Rater 1</center></b></td>
            <td></td>
            <td colspan=3><b><center>Rater 2</center></b></td>
        </tr>
        <tr>
            <th>Metric</th>
            <th>Transitional Zone</th>
            <th>Peripheral Zone</th>
            <th></th>
            <th>Transitional Zone</th>
            <th>Peripheral Zone</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td><a href='https://en.wikipedia.org/wiki/S%C3%B8rensen%E2%80%93Dice_coefficient'>Dice Coefficient</a></td>
            <td>0.877</td>
            <td>0.754</td>
            <td></td>
            <td>0.875</td>
            <td>0.730</td>
        </tr>
        <tr>
            <td><a href='https://en.wikipedia.org/wiki/Hausdorff_distance'>Hausdorff Distance</a></td>
            <td>18.3</td>
            <td>22.8</td>
            <td></td>
            <td>17.5</td>
            <td>33.2</td>
        </tr>
        <tr>
            <td><a href='https://github.com/deepmind/surface-distance'>Surface Distance</a></td>
            <td>2.19</td>
            <td>1.95</td>
            <td></td>
            <td>2.59</td>
            <td>1.88</td>
        </tr>
    </tbody>
</table>

For more details, please see the original [publication](https://doi.org/10.1016/j.compbiomed.2022.105817) or the official [GitHub repository](https://github.com/kbressem/prostate158).

## **System Configuration**

The model was trained for 100 epochs on a workstation with a single NVIDIA RTX 3080 GPU, which takes approximately 8 hours.

## **Limitations**

This training and inference pipeline was developed for research purposes only and uses software that has not been cleared or approved by the FDA or any other regulatory agency. The model is for research and development purposes only and cannot be used directly for clinical procedures.

## **Citation Info**

```bibtex
@article{ADAMS2022105817,
title = {Prostate158 - An expert-annotated 3T MRI dataset and algorithm for prostate cancer detection},
journal = {Computers in Biology and Medicine},
volume = {148},
pages = {105817},
year = {2022},
issn = {0010-4825},
doi = {https://doi.org/10.1016/j.compbiomed.2022.105817},
url = {https://www.sciencedirect.com/science/article/pii/S0010482522005789},
author = {Lisa C. Adams and Marcus R. Makowski and Günther Engel and Maximilian Rattunde and Felix Busch and Patrick Asbach and Stefan M. Niehues and Shankeeth Vinayahalingam and Bram {van Ginneken} and Geert Litjens and Keno K. Bressem},
keywords = {Prostate cancer, Deep learning, Machine learning, Artificial intelligence, Magnetic resonance imaging, Biparametric prostate MRI}
}
```

## **References**

[1] Sakinis, Tomas, et al. "Interactive segmentation of medical images through fully convolutional neural networks." arXiv preprint arXiv:1903.08205 (2019).
configs/inference.json
ADDED
@@ -0,0 +1,153 @@
```json
{
    "imports": [
        "$import pandas as pd",
        "$import os"
    ],
    "bundle_root": "/workspace/data/prostate_mri_anatomy",
    "output_dir": "$@bundle_root + '/eval'",
    "dataset_dir": "/workspace/data/prostate158/prostate158_train/",
    "datalist": "$list(@dataset_dir + pd.read_csv(@dataset_dir + 'valid.csv').t2)",
    "device": "$torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')",
    "network_def": {
        "_target_": "UNet",
        "spatial_dims": 3,
        "in_channels": 1,
        "out_channels": 3,
        "channels": [16, 32, 64, 128, 256, 512],
        "strides": [2, 2, 2, 2, 2],
        "num_res_units": 4,
        "norm": "batch",
        "act": "prelu",
        "dropout": 0.15
    },
    "network": "$@network_def.to(@device)",
    "preprocessing": {
        "_target_": "Compose",
        "transforms": [
            {"_target_": "LoadImaged", "keys": "image"},
            {"_target_": "EnsureChannelFirstd", "keys": "image"},
            {"_target_": "Orientationd", "keys": "image", "axcodes": "RAS"},
            {"_target_": "Spacingd", "keys": "image", "pixdim": [0.5, 0.5, 0.5], "mode": "bilinear"},
            {"_target_": "ScaleIntensityd", "keys": "image", "minv": 0, "maxv": 1},
            {"_target_": "NormalizeIntensityd", "keys": "image"},
            {"_target_": "EnsureTyped", "keys": "image"}
        ]
    },
    "dataset": {
        "_target_": "Dataset",
        "data": "$[{'image': i} for i in @datalist]",
        "transform": "@preprocessing"
    },
    "dataloader": {
        "_target_": "DataLoader",
        "dataset": "@dataset",
        "batch_size": 1,
        "shuffle": false,
        "num_workers": 4
    },
    "inferer": {
        "_target_": "SlidingWindowInferer",
        "roi_size": [96, 96, 96],
        "sw_batch_size": 4,
        "overlap": 0.5
    },
    "postprocessing": {
        "_target_": "Compose",
        "transforms": [
            {"_target_": "AsDiscreted", "keys": "pred", "argmax": true},
            {"_target_": "KeepLargestConnectedComponentd", "keys": "pred", "applied_labels": [1, 2]},
            {"_target_": "SaveImaged", "keys": "pred", "resample": false, "meta_keys": "pred_meta_dict", "output_dir": "@output_dir"}
        ]
    },
    "handlers": [
        {"_target_": "CheckpointLoader", "load_path": "$@bundle_root + '/models/model.pt'", "load_dict": {"model": "@network"}},
        {"_target_": "StatsHandler", "iteration_log": false}
    ],
    "evaluator": {
        "_target_": "SupervisedEvaluator",
        "device": "@device",
        "val_data_loader": "@dataloader",
        "network": "@network",
        "inferer": "@inferer",
        "postprocessing": "@postprocessing",
        "val_handlers": "@handlers",
        "amp": true
    },
    "evaluating": [
        "$setattr(torch.backends.cudnn, 'benchmark', True)",
        "$@evaluator.run()"
    ]
}
```
configs/logging.conf
ADDED
@@ -0,0 +1,21 @@
```ini
[loggers]
keys=root

[handlers]
keys=consoleHandler

[formatters]
keys=fullFormatter

[logger_root]
level=INFO
handlers=consoleHandler

[handler_consoleHandler]
class=StreamHandler
level=INFO
formatter=fullFormatter
args=(sys.stdout,)

[formatter_fullFormatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
```
configs/metadata.json
ADDED
@@ -0,0 +1,79 @@
```json
{
    "schema": "https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/meta_schema_20220324.json",
    "version": "0.1.0",
    "changelog": {
        "0.1.0": "complete the model package"
    },
    "monai_version": "0.9.1",
    "pytorch_version": "1.11.0",
    "numpy_version": "1.22.3",
    "optional_packages_version": {
        "nibabel": "3.2.2",
        "itk": "5.2.1",
        "pytorch-ignite": "0.4.9",
        "pandas": "1.4.2"
    },
    "task": "Segmentation of peripheral zone and central gland in prostate MRI",
    "description": "A pre-trained model for volumetric (3D) segmentation of the prostate from MRI images",
    "authors": "Keno Bressem",
    "copyright": "Copyright (c) Keno Bressem",
    "data_source": "Prostate158 from 10.5281/zenodo.6481141",
    "data_type": "nifti",
    "image_classes": "single channel data, intensity scaled to [0, 1]",
    "label_classes": "single channel data, 1 is central gland, 2 is peripheral zone, 0 is everything else",
    "pred_classes": "3 channels OneHot data, channel 1 is central gland, channel 2 is peripheral zone, channel 0 is background",
    "eval_metrics": {
        "mean_dice": {
            "central gland": 0.88,
            "peripheral zone": 0.75
        }
    },
    "intended_use": "This is an example, not to be used for diagnostic purposes",
    "references": [
        "Adams, L. C., Makowski, M. R., Engel, G., Rattunde, M., Busch, F., Asbach, P., ... & Bressem, K. K. (2022). Prostate158 - An expert-annotated 3T MRI dataset and algorithm for prostate cancer detection. Computers in Biology and Medicine, 148, 105817."
    ],
    "network_data_format": {
        "inputs": {
            "image": {
                "type": "image",
                "format": "magnitude",
                "modality": "MR",
                "num_channels": 1,
                "spatial_shape": [96, 96, 96],
                "dtype": "float32",
                "value_range": [0, 1],
                "is_patch_data": true,
                "channel_def": {"0": "image"}
            }
        },
        "outputs": {
            "pred": {
                "type": "image",
                "format": "labels",
                "num_channels": 3,
                "spatial_shape": [96, 96, 96],
                "dtype": "float32",
                "value_range": [],
                "is_patch_data": true,
                "channel_def": {"0": "background", "1": "central gland", "2": "peripheral zone"}
            }
        }
    }
}
```
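The `pred_classes` entry describes a one-hot layout: the single-channel label map (values 0/1/2) expands into three binary channels. A minimal numpy sketch of that conversion (illustrative; in the bundle's pipeline this is done by MONAI's `AsDiscreted` transform with `to_onehot: 3`):

```python
import numpy as np

def to_onehot(label, num_classes=3):
    """Convert an integer label volume to a channel-first one-hot array."""
    onehot = np.zeros((num_classes,) + label.shape, dtype=np.float32)
    for c in range(num_classes):
        onehot[c] = (label == c)
    return onehot

# tiny 2 x 2 x 1 example: background (0), central gland (1), peripheral zone (2)
label = np.array([[[0], [1]], [[2], [0]]])
pred = to_onehot(label)
```

Every voxel contributes to exactly one channel, so the one-hot array sums to the number of voxels.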
configs/train.json
ADDED
@@ -0,0 +1,373 @@
```json
{
    "imports": [
        "$import pandas as pd",
        "$import os",
        "$import ignite"
    ],
    "bundle_root": "/workspace/model-zoo/models/prostate_mri_anatomy",
    "ckpt_dir": "$@bundle_root + '/models'",
    "output_dir": "$@bundle_root + '/eval'",
    "dataset_dir": "/workspace/data/prostate158/prostate158_train/",
    "images": "$list(@dataset_dir + pd.read_csv(@dataset_dir + 'train.csv').t2)",
    "labels": "$list(@dataset_dir + pd.read_csv(@dataset_dir + 'train.csv').t2_anatomy_reader1)",
    "val_images": "$list(@dataset_dir + pd.read_csv(@dataset_dir + 'valid.csv').t2)",
    "val_labels": "$list(@dataset_dir + pd.read_csv(@dataset_dir + 'valid.csv').t2_anatomy_reader1)",
    "device": "$torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')",
    "network_def": {
        "_target_": "UNet",
        "spatial_dims": 3,
        "in_channels": 1,
        "out_channels": 3,
        "channels": [16, 32, 64, 128, 256, 512],
        "strides": [2, 2, 2, 2, 2],
        "num_res_units": 4,
        "norm": "batch",
        "act": "prelu",
        "dropout": 0.15
    },
    "network": "$@network_def.to(@device)",
    "loss": {
        "_target_": "DiceFocalLoss",
        "to_onehot_y": true,
        "softmax": true,
        "include_background": false
    },
    "optimizer": {
        "_target_": "Novograd",
        "params": "$@network.parameters()",
        "lr": 0.001,
        "amsgrad": true,
        "weight_decay": 0.01
    },
    "train": {
        "deterministic_transforms": [
            {"_target_": "LoadImaged", "keys": ["image", "label"]},
            {"_target_": "EnsureChannelFirstd", "keys": ["image", "label"]},
            {"_target_": "Orientationd", "keys": ["image", "label"], "axcodes": "RAS"},
            {"_target_": "Spacingd", "keys": ["image", "label"], "pixdim": [0.5, 0.5, 0.5], "mode": ["bilinear", "nearest"]},
            {"_target_": "ScaleIntensityd", "keys": "image", "minv": 0, "maxv": 1},
            {"_target_": "NormalizeIntensityd", "keys": "image"},
            {"_target_": "EnsureTyped", "keys": ["image", "label"]}
        ],
        "random_transforms": [
            {"_target_": "RandAdjustContrastd", "keys": "image", "prob": 0.15, "gamma": 2.0},
            {"_target_": "RandGaussianNoised", "keys": "image", "prob": 0.15, "mean": 0.1, "std": 0.25},
            {"_target_": "RandAffined", "keys": ["image", "label"], "prob": 0.15, "rotate_range": 5, "shear_range": 0.5, "translate_range": 25},
            {"_target_": "RandBiasFieldd", "keys": "image", "prob": 0.15, "coeff_range": [0.0, 0.01], "degree": 10},
            {"_target_": "Rand3DElasticd", "keys": ["image", "label"], "prob": 0.15, "magnitude_range": [0.5, 1.5], "rotate_range": 5, "shear_range": 0.5, "sigma_range": [0.5, 1.5], "translate_range": 25},
            {"_target_": "RandZoomd", "keys": ["image", "label"], "prob": 0.15, "max": 1.1, "min": 0.9},
            {"_target_": "RandCropByPosNegLabeld", "keys": ["image", "label"], "label_key": "label", "spatial_size": [96, 96, 96], "pos": 1, "neg": 1, "num_samples": 4, "image_key": "image", "image_threshold": 0},
            {"_target_": "RandShiftIntensityd", "keys": "image", "prob": 0.15, "offsets": 0.2}
        ],
        "preprocessing": {
            "_target_": "Compose",
            "transforms": "$@train#deterministic_transforms + @train#random_transforms"
        },
        "dataset": {
            "_target_": "PersistentDataset",
            "data": "$[{'image': i, 'label': l} for i, l in zip(@images, @labels)]",
            "transform": "@train#preprocessing",
            "cache_dir": "$@bundle_root + '/cache'"
        },
        "dataloader": {
            "_target_": "DataLoader",
            "dataset": "@train#dataset",
            "batch_size": 2,
            "shuffle": true,
            "num_workers": 4
        },
        "inferer": {
            "_target_": "SimpleInferer"
        },
        "postprocessing": {
            "_target_": "Compose",
            "transforms": [
                {"_target_": "Activationsd", "keys": "pred", "softmax": true},
                {"_target_": "AsDiscreted", "keys": ["pred", "label"], "argmax": [true, false], "to_onehot": 3}
            ]
        },
        "handlers": [
            {"_target_": "ValidationHandler", "validator": "@validate#evaluator", "epoch_level": true, "interval": 5},
            {"_target_": "StatsHandler", "tag_name": "train_loss", "output_transform": "$monai.handlers.from_engine(['loss'], first=True)"},
            {"_target_": "TensorBoardStatsHandler", "log_dir": "@output_dir", "tag_name": "train_loss", "output_transform": "$monai.handlers.from_engine(['loss'], first=True)"}
        ],
        "key_metric": {
            "train_dice": {
                "_target_": "MeanDice",
                "include_background": false,
                "output_transform": "$monai.handlers.from_engine(['pred', 'label'])"
            }
        },
        "trainer": {
            "_target_": "SupervisedTrainer",
            "max_epochs": 100,
            "device": "@device",
            "train_data_loader": "@train#dataloader",
            "network": "@network",
            "loss_function": "@loss",
            "optimizer": "@optimizer",
            "inferer": "@train#inferer",
            "postprocessing": "@train#postprocessing",
            "key_train_metric": "@train#key_metric",
            "train_handlers": "@train#handlers",
            "amp": true
        }
    },
    "validate": {
        "preprocessing": {
            "_target_": "Compose",
            "transforms": "%train#deterministic_transforms"
        },
        "dataset": {
            "_target_": "PersistentDataset",
            "data": "$[{'image': i, 'label': l} for i, l in zip(@val_images, @val_labels)]",
            "transform": "@validate#preprocessing",
            "cache_dir": "$@bundle_root + '/cache'"
        },
        "dataloader": {
            "_target_": "DataLoader",
            "dataset": "@validate#dataset",
            "batch_size": 1,
            "shuffle": false,
            "num_workers": 4
        },
        "inferer": {
            "_target_": "SlidingWindowInferer",
            "roi_size": [96, 96, 96],
            "sw_batch_size": 16,
            "overlap": 0.5
        },
        "postprocessing": "%train#postprocessing",
        "handlers": [
            {"_target_": "StatsHandler", "iteration_log": false},
            {"_target_": "TensorBoardStatsHandler", "log_dir": "@output_dir", "iteration_log": false},
            {"_target_": "CheckpointSaver", "save_dir": "@ckpt_dir", "save_dict": {"model": "@network"}, "save_key_metric": true, "key_metric_filename": "model.pt"}
        ],
        "key_metric": {
            "val_mean_dice": {
                "_target_": "MeanDice",
                "include_background": false,
                "output_transform": "$monai.handlers.from_engine(['pred', 'label'])"
            }
        },
        "additional_metrics": {
            "val_hausdorff_distance": {
                "_target_": "HausdorffDistance",
                "include_background": false,
                "reduction": "mean",
                "output_transform": "$monai.handlers.from_engine(['pred', 'label'])"
            },
            "val_surface_distance": {
                "_target_": "SurfaceDistance",
                "include_background": false,
                "reduction": "mean",
                "output_transform": "$monai.handlers.from_engine(['pred', 'label'])"
            }
        },
        "evaluator": {
            "_target_": "SupervisedEvaluator",
            "device": "@device",
            "val_data_loader": "@validate#dataloader",
            "network": "@network",
            "inferer": "@validate#inferer",
            "postprocessing": "@validate#postprocessing",
            "key_val_metric": "@validate#key_metric",
            "additional_metrics": "@validate#additional_metrics",
            "val_handlers": "@validate#handlers",
            "amp": true
        }
    },
    "training": [
        "$monai.utils.set_determinism(seed=42)",
        "$setattr(torch.backends.cudnn, 'benchmark', True)",
        "$@train#trainer.run()"
    ]
}
```
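The `DiceFocalLoss` configured above combines a focal term with a soft Dice term. The Dice part, with `include_background: false` (i.e. only the two foreground channels scored), can be sketched in plain numpy. This is illustrative only; MONAI's implementation additionally applies the `softmax` and `to_onehot_y` options set in the config.

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-5):
    """Soft Dice loss averaged over foreground channels.

    pred, target: arrays of shape (C, ...) holding per-channel foreground
    probabilities / one-hot labels, background channel already excluded.
    """
    axes = tuple(range(1, pred.ndim))
    intersection = (pred * target).sum(axis=axes)
    denominator = pred.sum(axis=axes) + target.sum(axis=axes)
    # per-channel Dice score, smoothed by eps to avoid division by zero
    dice = (2.0 * intersection + eps) / (denominator + eps)
    return float(1.0 - dice.mean())

# two foreground channels on a small 4^3 patch
target = np.zeros((2, 4, 4, 4), dtype=np.float32)
target[0, 1:3, 1:3, 1:3] = 1.0  # "central gland"
target[1, 0, :, :] = 1.0        # "peripheral zone"
loss_perfect = soft_dice_loss(target, target)
loss_empty = soft_dice_loss(np.zeros_like(target), target)
```

A perfect prediction drives the loss toward 0, while predicting nothing drives it toward 1, which is the behavior the trainer's gradient descent exploits.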
docs/README.md
ADDED
@@ -0,0 +1,111 @@
1 |
+
# Prostate MRI zonal segmentation
|
2 |
+
|
3 |
+
### **Authors**
|
4 |
+
|
5 |
+
Lisa C. Adams, Keno K. Bressem
|
6 |
+
|
7 |
+
### **Tags**
|
8 |
+
|
9 |
+
Segmentation, MR, Prostate
|
10 |
+
|
11 |
+
## **Model Description**
|
12 |
+
This model was trained with the UNet architecture [1] and is used for 3D volumetric segmentation of the anatomical prostate zones on T2w MRI images. The segmentation of the anatomical regions is formulated as a voxel-wise classification. Each voxel is classified as either central gland (1), peripheral zone (2), or background (0). The model is optimized using a gradient descent method that minimizes the focal soft-dice loss between the predicted mask and the actual segmentation.
|
13 |
+
|
14 |
+
## **Data**
|
15 |
+
The model was trained in the prostate158 training data, which is available at https://doi.org/10.5281/zenodo.6481141. Only T2w images were used for this task.
|
16 |
+
|
17 |
+
|
18 |
+
### **Preprocessing**
|
19 |
+
MRI images in the prostate158 dataset were preprocessed, including center cropping and resampling. When applying the model to new data, this preprocessing should be repeated.
|
20 |
+
|
21 |
+
#### **Center cropping**
|
22 |
+
T2w images were acquired with a voxel spacing of 0.47 x 0.47 x 3 mm and an axial FOV size of 180 x 180 mm. However, the prostate rarely exceeds an axial diameter of 100 mm, and for zonal segmentation, the tissue surrounding the prostate is not of interest and only increases the image size and thus the computational cost. Center-cropping can reduce the image size without sacrificing information.
|
23 |
+
|
24 |
+
The script `center_crop.py` allows to reproduce center-cropping as performed in the prostate158 paper.
|
25 |
+
|
26 |
+
```bash
|
27 |
+
python scripts/center_crop.py --file_name path/to/t2_image --out_name cropped_t2
|
28 |
+
```
|
29 |
+
|
30 |
+
#### **Resampling**
|
31 |
+
DWI and ADC sequences in prostate158 were resampled to the orientation and voxel spacing of the T2w sequence. As the zonal segmentation uses T2w images, no additional resampling is nessecary. However, the training script will perform additonal resampling automatically.
|
32 |
+
|
33 |
+
|
34 |
+
## **Performance**
|
35 |
+
The model achives the following performance on the prostate158 test dataset:
|
36 |
+
|
37 |
+
<table border=1 frame=void rules=rows>
|
38 |
+
<thead>
|
39 |
+
<tr>
|
40 |
+
<td></td>
|
41 |
+
<td colspan = 3><b><center>Rater 1</center></b></td>
|
42 |
+
<td> </td>
|
43 |
+
<td colspan = 3><b><center>Rater 2</center></b></td>
|
44 |
+
</tr>
|
45 |
+
<tr>
|
46 |
+
<th>Metric</th>
|
47 |
+
<th>Transitional Zone</th>
|
48 |
+
<th>Peripheral Zone</th>
|
49 |
+
<th> </th>
|
50 |
+
<th>Transitional Zone</th>
|
51 |
+
<th>Peripheral Zone</th>
|
52 |
+
</tr>
|
53 |
+
</thead>
|
54 |
+
<tbody>
|
55 |
+
<tr>
|
56 |
+
<td><a href='https://en.wikipedia.org/wiki/S%C3%B8rensen%E2%80%93Dice_coefficient'>Dice Coefficient </a></td>
|
57 |
+
<td> 0.877</td>
|
58 |
+
<td> 0.754</td>
|
59 |
+
<td> </td>
|
60 |
+
<td> 0.875</td>
|
61 |
+
<td> 0.730</td>
|
62 |
+
</tr>
|
63 |
+
<tr>
|
64 |
+
<td><a href='https://en.wikipedia.org/wiki/Hausdorff_distance'>Hausdorff Distance </a></td>
|
65 |
+
<td> 18.3</td>
|
66 |
+
<td> 22.8</td>
|
67 |
+
<td> </td>
|
68 |
+
<td> 17.5</td>
|
69 |
+
<td> 33.2</td>
|
70 |
+
</tr>
|
71 |
+
<tr>
|
72 |
+
<td><a href='https://github.com/deepmind/surface-distance'>Surface Distance </a></td>
|
73 |
+
<td> 2.19</td>
|
74 |
+
<td> 1.95</td>
|
75 |
+
<td> </td>
|
76 |
+
<td> 2.59</td>
|
77 |
+
<td> 1.88</td>
|
78 |
+
</tr>
|
79 |
+
</tbody>
|
80 |
+
</table>
|
81 |
+
|
82 |
+
For more details, please see the original [publication](https://doi.org/10.1016/j.compbiomed.2022.105817) or official [GitHub repository](https://github.com/kbressem/prostate158)
|
83 |
+
|
84 |
+
|
85 |
+
## **System Configuration**
|
86 |
+
The model was trained for 100 epochs on a workstaion with a single Nvidia RTX 3080 GPU. This takes approximatly 8 hours.
|
87 |
+
|
88 |
+
## **Limitations** (Optional)
|
89 |
+
|
90 |
+
This training and inference pipeline was developed for research purposes only. This research use only software that has not been cleared or approved by FDA or any regulatory agency. The model is for research/developmental purposes only and cannot be used directly for clinical procedures.
|
91 |
+
|
92 |
+
## **Citation Info** (Optional)

```
@article{ADAMS2022105817,
  title = {Prostate158 - An expert-annotated 3T MRI dataset and algorithm for prostate cancer detection},
  journal = {Computers in Biology and Medicine},
  volume = {148},
  pages = {105817},
  year = {2022},
  issn = {0010-4825},
  doi = {https://doi.org/10.1016/j.compbiomed.2022.105817},
  url = {https://www.sciencedirect.com/science/article/pii/S0010482522005789},
  author = {Lisa C. Adams and Marcus R. Makowski and Günther Engel and Maximilian Rattunde and Felix Busch and Patrick Asbach and Stefan M. Niehues and Shankeeth Vinayahalingam and Bram {van Ginneken} and Geert Litjens and Keno K. Bressem},
  keywords = {Prostate cancer, Deep learning, Machine learning, Artificial intelligence, Magnetic resonance imaging, Biparametric prostate MRI}
}
```

## **References**

[1] Sakinis, Tomas, et al. "Interactive segmentation of medical images through fully convolutional neural networks." arXiv preprint arXiv:1903.08205 (2019).

models/model.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3c18e4c658bb088f551b1d63219ef8340fe6256d016c75fc140d3da49dda696d
size 152810901
scripts/center_crop.py
ADDED
@@ -0,0 +1,83 @@
# Copyright 2020 - 2022 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#     http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


import argparse
from typing import Union

import SimpleITK as sitk  # noqa N813


def _int_or_float(value: str) -> Union[int, float]:
    # argparse needs a callable converter; `type=Union[int, float]` is not
    # callable, so parse the margin as int when possible, else as float.
    try:
        return int(value)
    except ValueError:
        return float(value)


parser = argparse.ArgumentParser(description="Center crop a 3D volume")
parser.add_argument("--file_name", type=str, required=True, help="Path to the input file to center crop.")
parser.add_argument(
    "--margin",
    type=_int_or_float,
    required=False,
    default=0.2,
    help="Crop margin applied to EACH side in the axial plane. "
    "If given a float value, will perform a percentage crop. "
    "If given an int value, will perform an absolute crop.",
)
parser.add_argument("--out_name", type=str, required=True, help="Path and filename for the cropped volume")

args = parser.parse_args()


def _flatten(t):
    return [item for sublist in t for item in sublist]


def crop(image: sitk.Image, margin: Union[int, float], interpolator=sitk.sitkLinear):
    """
    Crops a sitk.Image while retaining correct spacing. Negative margins will lead to zero padding.

    Args:
        image: a sitk.Image
        margin: margins to crop. A single int (absolute crop) or float (percentage crop);
            lists of int/float or nested lists are also supported.
    """
    if isinstance(margin, (list, tuple)):
        assert len(margin) == 3, "expected margin to be of length 3"
    else:
        assert isinstance(margin, (int, float)), "expected margin to be an int or float value"
        margin = [margin, margin, margin]

    # expand each axis margin to a [lower, upper] pair
    margin = [m if isinstance(m, (tuple, list)) else [m, m] for m in margin]
    old_size = image.GetSize()

    # calculate the number of voxels to crop on each side
    if all([isinstance(m, float) for m in _flatten(margin)]):
        assert all([m >= 0 and m < 0.5 for m in _flatten(margin)]), "margins must be between 0 and 0.5"
        to_crop = [[int(sz * _m) for _m in m] for sz, m in zip(old_size, margin)]
    elif all([isinstance(m, int) for m in _flatten(margin)]):
        to_crop = margin
    else:
        raise ValueError("Wrong format of margins.")

    new_size = [sz - sum(c) for sz, c in zip(old_size, to_crop)]

    # the old origin has index (0, 0, 0);
    # the new origin has index (to_crop[0][0], to_crop[1][0], to_crop[2][0])
    new_origin = image.TransformIndexToPhysicalPoint([c[0] for c in to_crop])

    # create a reference image to resample onto
    ref_image = sitk.Image(new_size, image.GetPixelIDValue())
    ref_image.SetSpacing(image.GetSpacing())
    ref_image.SetOrigin(new_origin)
    ref_image.SetDirection(image.GetDirection())

    return sitk.Resample(image, ref_image, interpolator=interpolator)


if __name__ == "__main__":
    image = sitk.ReadImage(args.file_name)
    # keep the zero margin's type consistent with the axial margin so the
    # int/float consistency check in `crop` accepts the margin list
    zero = 0 if isinstance(args.margin, int) else 0.0
    cropped = crop(image, [args.margin, args.margin, zero])
    sitk.WriteImage(cropped, args.out_name)
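The margin arithmetic in `crop` can be checked in isolation. The sketch below (hypothetical helper names, plain Python with no SimpleITK dependency) mirrors how a scalar margin becomes per-side voxel counts and a new volume size:

```python
from typing import List, Sequence, Union


def margins_to_voxels(size: Sequence[int], margin: Union[int, float]) -> List[List[int]]:
    """Expand a scalar margin to [lower, upper] voxel counts per axis,
    interpreting a float as a fraction of the axis size, as crop() does."""
    pairs = [[margin, margin] for _ in size]
    if isinstance(margin, float):
        return [[int(sz * m) for m in pair] for sz, pair in zip(size, pairs)]
    return pairs


def cropped_size(size: Sequence[int], to_crop: List[List[int]]) -> List[int]:
    """Size remaining after removing the lower/upper margins on each axis."""
    return [sz - sum(pair) for sz, pair in zip(size, to_crop)]


# a 256x256x24 volume with a 20% margin on each side of every axis
to_crop = margins_to_voxels((256, 256, 24), 0.2)
print(cropped_size((256, 256, 24), to_crop))  # [154, 154, 16]
```

The script itself would be invoked along the lines of `python scripts/center_crop.py --file_name t2.nii.gz --margin 0.2 --out_name t2_cropped.nii.gz` (file names hypothetical); note that the script leaves the z axis uncropped, whereas this sketch crops all three axes for illustration.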