Thyroid BRAF-RAS Score (BRS) v1 Model Card
This model card describes a model associated with the manuscript "Deep learning prediction of BRAF-RAS gene expression signature identifies noninvasive follicular thyroid neoplasms with papillary-like nuclear features", by Dolezal et al., available here.
Model Details
- Developed by: James Dolezal
- Model type: Deep convolutional neural network image classifier
- Language(s): English
- License: GPL-3.0
- Model Description: This model predicts the BRAF-RAS Score (BRS) from H&E-stained pathologic images of thyroid neoplasms. BRS is a gene expression score scaled from -1 (BRAF-like) to +1 (RAS-like), indicating how closely a tumor's gene expression resembles that of a BRAF-mutant or RAS-mutant tumor. The model is an Xception model with two dropout-enabled hidden layers.
- Image processing: This model expects images of H&E-stained pathology slides at 299 x 299 px and 302 x 302 μm resolution. Images should be stain-normalized using a modified Reinhard normalizer ("Reinhard-Fast"), available here. The stain normalizer should be fit using the target_means and target_stds listed in the model's params.json file. Images should then be standardized with tf.image.per_image_standardization().
- Resources for more information: GitHub Repository
Uses
Examples
For direct use, the model can be loaded using TensorFlow/Keras:
import tensorflow as tf
model = tf.keras.models.load_model('/path/')
or loaded with Slideflow version 1.1+ with the following syntax:
import slideflow as sf
model = sf.model.load('/path/')
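A prediction can then be generated for a batch of preprocessed tiles using the Keras predict API. The following is a minimal sketch; the placeholder batch is illustrative only, and real tiles must first be stain-normalized and standardized as described in the Image processing section above:
import numpy as np
# Placeholder batch of preprocessed tiles, shape (batch, 299, 299, 3).
tiles = np.zeros((4, 299, 299, 3), dtype=np.float32)
# Each prediction is a BRS value scaled from -1 (BRAF-like) to +1 (RAS-like).
predictions = model.predict(tiles)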
The stain normalizer can be loaded and fit using Slideflow:
normalizer = sf.util.get_model_normalizer('/path/')
The stain normalizer has a native TensorFlow transform and can be directly applied to a tf.data.Dataset:
# Map the stain normalizer transformation
# to a tf.data.Dataset
dataset = dataset.map(normalizer.tf_to_tf)
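For example, a directory of extracted tile images can be normalized and standardized before inference. This is a minimal sketch; the tile directory and loading function are illustrative and not part of the model's files:
import tensorflow as tf
def load_tile(path):
    # Decode a tile image to a uint8 RGB tensor and resize to the model's input size.
    image = tf.io.decode_jpeg(tf.io.read_file(path), channels=3)
    return tf.cast(tf.image.resize(image, [299, 299]), tf.uint8)
dataset = tf.data.Dataset.list_files('/path/to/tiles/*.jpg')
dataset = dataset.map(load_tile)
# Apply the fitted stain normalizer, then standardize as during training.
dataset = dataset.map(normalizer.tf_to_tf)
dataset = dataset.map(tf.image.per_image_standardization)
dataset = dataset.batch(64)
The batched dataset can then be passed to the model's predict method.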
Alternatively, the model can generate predictions for whole-slide images processed with Slideflow in an end-to-end Project. To do so, simply pass the model path to the Project.predict() function:
import slideflow as sf
P = sf.Project('/path/to/slideflow/project')
P.predict('/model/path')
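If tiles have not yet been extracted for the project, they can first be extracted at the tile size this model expects (a sketch assuming Slideflow's Project.extract_tiles() interface; tile sizes are taken from the Image processing section above):
# Extract tiles at 302 x 302 μm, resized to 299 x 299 px, to match the model.
P.extract_tiles(tile_px=299, tile_um=302)
P.predict('/model/path')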
Direct Use
This model is intended for research purposes only. Possible research areas and tasks include:
- Applications in educational settings.
- Research on pathology classification models for thyroid neoplasms.
Excluded uses are described below.
Misuse and Out-of-Scope Use
This model should not be used in a clinical setting to generate predictions that inform patients, physicians, or other members of a patient's health care team, outside the context of an approved research protocol. Using the model in a clinical setting outside an approved research protocol is a misuse of this model. This includes, but is not limited to:
- Generating predictions of images from a patient's tumor and sharing those predictions with the patient
- Generating predictions of images from a patient's tumor and sharing those predictions with the patient's physician, or other members of the patient's healthcare team
- Influencing a patient's health care treatment in any way based on output from this model
Limitations
The model has not been validated in contexts where non-thyroid neoplasms, or rare thyroid subtypes such as anaplastic thyroid carcinoma, are possible.
Bias
This model was trained on The Cancer Genome Atlas (TCGA), which contains patient data from communities and cultures that may not reflect the general population. This dataset comprises images from multiple institutions, which may introduce site-specific batch effects as a potential source of bias (Howard, 2021).
Training
Training Data
The following dataset was used to train the model:
- The Cancer Genome Atlas (TCGA), THCA cohort (see next section)
This model was trained on a total of 369 slides, with 116 BRAF-like tumors and 271 RAS-like tumors.
Training Procedure
Each whole-slide image was sectioned into tiles in a grid-wise fashion at 302 x 302 μm. Image tiles were extracted at the nearest downsample layer and resized to 299 x 299 px using Libvips. During training (an illustrative sketch of this augmentation pipeline follows the list below):
- Images are stain-normalized with a modified Reinhard normalizer ("Reinhard-Fast", available here), which omits the brightness standardization step
- Images are randomly flipped and rotated (90°, 180°, or 270°)
- Images have a 50% chance of being JPEG compressed, with a quality level between 50 and 100
- Images have a 10% chance of random Gaussian blur, with sigma between 0.5 and 2.0
- Images are standardized with tf.image.per_image_standardization()
- Images are classified through an Xception block, followed by two hidden layers with dropout (p=0.1) enabled during training
- The loss is mean squared error using the linear outcome BRS
- Training is completed after 1 epoch
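The augmentation steps above can be expressed roughly as follows. This is an illustrative sketch rather than the exact training code; probabilities and parameter ranges are taken from the list above, and the random Gaussian blur step is omitted because tf.image has no built-in equivalent:
import tensorflow as tf
def augment(image):
    # `image` is assumed to be a stain-normalized uint8 tile of shape (299, 299, 3).
    # Random flips and a random 0/90/180/270 degree rotation.
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_flip_up_down(image)
    image = tf.image.rot90(image, k=tf.random.uniform([], 0, 4, dtype=tf.int32))
    # 50% chance of JPEG compression with a quality level between 50 and 100.
    image = tf.cond(
        tf.random.uniform([]) < 0.5,
        lambda: tf.image.random_jpeg_quality(image, 50, 100),
        lambda: image)
    # Standardize to zero mean and unit variance.
    return tf.image.per_image_standardization(image)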
Additional training information (an illustrative Keras sketch of the model head and optimizer configuration follows this list):
- Hardware: 1 x A100 GPU
- Optimizer: Adam
- Batch size: 128
- Learning rate: 0.0001, with a decay of 0.98 every 512 steps
- Hidden layers: 2 hidden layers of width 1024, with dropout p=0.1
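As an illustrative sketch, the architecture and optimizer described above correspond roughly to the following Keras configuration (the hidden-layer activation is an assumption, and this is not the exact training code):
import tensorflow as tf
# Xception feature extractor, two hidden layers of width 1024 with dropout
# (p=0.1), and a single linear output for the continuous BRS.
base = tf.keras.applications.Xception(
    include_top=False, pooling='avg', input_shape=(299, 299, 3))
x = base.output
for _ in range(2):
    x = tf.keras.layers.Dense(1024, activation='relu')(x)  # activation assumed
    x = tf.keras.layers.Dropout(0.1)(x)
outputs = tf.keras.layers.Dense(1, activation='linear')(x)
brs_model = tf.keras.Model(base.input, outputs)
# Adam with a learning rate of 0.0001, decayed by a factor of 0.98 every
# 512 steps, and mean squared error loss on the linear BRS outcome.
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.0001, decay_steps=512, decay_rate=0.98)
brs_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=schedule),
                  loss='mse')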
Evaluation Results
External evaluation results are currently under peer review and will be posted once publicly available.