---
title: README
emoji: 🏃
colorFrom: gray
colorTo: purple
sdk: static
pinned: false
license: mit
---
# Model Description
TinyClinicalBERT is a distilled version of [BioClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT), distilled for 3 epochs with a total batch size of 192 on the MIMIC-III notes dataset.
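A minimal usage sketch with the Hugging Face Transformers library is shown below. The repo id is an assumption used for illustration; replace it with this model's actual Hub id.

```python
from transformers import AutoTokenizer, AutoModel

# Assumed repo id for illustration; replace with the model's actual Hub id.
model_name = "nlpie/tiny-clinicalbert"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

text = "The patient was admitted with acute shortness of breath."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)

# Last-layer hidden states: one vector per token.
print(outputs.last_hidden_state.shape)
```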
# Distillation Procedure
This model uses a distillation method called ‘transformer-layer distillation’, which is applied to each layer of the student to align its attention maps and hidden states with those of the teacher.
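As a rough sketch only (not the exact training code for this model), a transformer-layer distillation loss for one student/teacher layer pair can be written as below; the tensor sizes, the layer mapping, and the linear projection used to compare hidden states of different widths are assumptions.

```python
import torch
import torch.nn as nn

mse = nn.MSELoss()

def layer_distillation_loss(student_attn, teacher_attn,
                            student_hidden, teacher_hidden,
                            hidden_projection):
    # student_attn / teacher_attn: attention maps, shape (batch, heads, seq, seq).
    # student_hidden: (batch, seq, d_student); teacher_hidden: (batch, seq, d_teacher).
    # hidden_projection: nn.Linear(d_student, d_teacher), so the states are comparable.
    attn_loss = mse(student_attn, teacher_attn)
    hidden_loss = mse(hidden_projection(student_hidden), teacher_hidden)
    return attn_loss + hidden_loss

# Toy example with random tensors (illustrative sizes only).
batch, heads, seq = 2, 12, 16
d_student, d_teacher = 312, 768
proj = nn.Linear(d_student, d_teacher)
loss = layer_distillation_loss(
    torch.rand(batch, heads, seq, seq), torch.rand(batch, heads, seq, seq),
    torch.rand(batch, seq, d_student), torch.rand(batch, seq, d_teacher),
    proj,
)
loss.backward()  # gradients flow into the projection (and, in real training, the student)
```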
# Architecture and Initialisation
This model uses 4 hidden layers with a hidden dimension size and an embedding size of 312, resulting in a total of 15M parameters. Because its hidden dimension is smaller than the teacher's, the student's weights cannot be copied from the teacher and are instead randomly initialised.
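For illustration, the architecture described above roughly corresponds to the following Transformers configuration; the attention-head count and feed-forward (intermediate) size are assumptions not stated in this card.

```python
from transformers import BertConfig, BertModel

# Sketch of the described architecture: 4 layers, hidden/embedding size 312.
# num_attention_heads and intermediate_size are illustrative assumptions.
config = BertConfig(
    num_hidden_layers=4,
    hidden_size=312,
    num_attention_heads=12,
    intermediate_size=1200,
)

# Randomly initialised, as described: no weights are copied from the teacher.
model = BertModel(config)
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.1f}M parameters")
```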
# Citation
If you use this model, please consider citing the following paper:
```bibtex
@article{rohanian2023lightweight,
  title={Lightweight transformers for clinical natural language processing},
  author={Rohanian, Omid and Nouriborji, Mohammadmahdi and Jauncey, Hannah and Kouchaki, Samaneh and Nooralahzadeh, Farhad and Clifton, Lei and Merson, Laura and Clifton, David A and ISARIC Clinical Characterisation Group and others},
  journal={Natural Language Engineering},
  pages={1--28},
  year={2023},
  publisher={Cambridge University Press}
}
```