---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-es-clinical-trials-neg-spec
  results: []
widget:
- text: "Pacientes sanos, sin ninguna enfermedad, que no tomen ningún medicamento"
- text: "Sujetos adultos con cáncer de próstata asintomáticos y no tratados previamente"
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-es-clinical-trials-neg-spec

This named entity recognition model detects negation and speculation cues, as well as the negated and speculated concepts they affect (a minimal usage sketch with the `transformers` pipeline follows the list below):
- Neg_cue: negation cue (e.g. *no*, *sin*)
- Negated: negated entity or event (e.g. *sin **dolor***)
- Spec_cue: speculation cue (e.g. *posiblemente*)
- Speculated: speculated entity or event (e.g. *posiblemente **sobreviva***)

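The snippet below is a minimal usage sketch with the `transformers` token-classification pipeline. The Hub repository id used here is an assumption inferred from the model name and the author's namespace, and may differ from the actual one.

```python
from transformers import pipeline

# Hypothetical repository id, inferred from the model name; adjust if needed.
MODEL_ID = "lcampillos/roberta-es-clinical-trials-neg-spec"

# "simple" aggregation merges subword pieces into whole-entity spans.
ner = pipeline("token-classification", model=MODEL_ID, aggregation_strategy="simple")

text = "Pacientes sanos, sin ninguna enfermedad, que no tomen ningún medicamento"
for entity in ner(text):
    # Each prediction carries the label (e.g. Neg_cue, Negated), a confidence
    # score and the character offsets of the span in the input text.
    print(entity["entity_group"], entity["word"],
          round(float(entity["score"]), 3), (entity["start"], entity["end"]))
```
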
The model achieves the following results on the test set (when trained on the combined training and development sets; results are averaged over 5 evaluation rounds):
- Precision: 0.833 (±0.001)
- Recall: 0.870 (±0.001)
- F1: 0.851 (±0.001)
- Accuracy: 0.956 (±0.001)

## Model description

This model adapts the pre-trained model [bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es), presented in [Carrino et al. (2022)](https://aclanthology.org/2022.bionlp-1.19/).
It is fine-tuned to detect negation and speculation cues, and the concepts within their scope, in Spanish texts about clinical trials.
The model is fine-tuned on the [NUBes corpus (Lima et al. 2020)](https://aclanthology.org/2020.lrec-1.708/) and on the [CT-EBM-ES corpus (Campillos-Llanos et al. 2021)](https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/s12911-021-01395-z).

## Intended uses & limitations

**Disclosure**: *This model is under development and needs to be improved. It should not be used for medical decision making without human assistance and supervision.*

This model is intended for general-purpose use and may show bias and/or other undesirable distortions.

Third parties who deploy or provide systems and/or services based on any of these models (or on systems that use these models) should note that it is their responsibility to mitigate the risks arising from their use. Third parties must, in any event, comply with applicable regulations, including regulations concerning the use of artificial intelligence.

The owner or creator of the models will in no event be liable for any results arising from the use made by third parties of these models.

**Descargo de responsabilidad**: *Esta herramienta se encuentra en desarrollo y no debe ser empleada para la toma de decisiones médicas.*

La finalidad de este modelo es generalista, y se advierte que puede tener sesgos y/u otro tipo de distorsiones indeseables.

Terceras partes que desplieguen o proporcionen sistemas y/o servicios usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) han de tener presente que es su responsabilidad abordar y minimizar los riesgos derivados de su uso. Las terceras partes, en cualquier circunstancia, deben cumplir con la normativa aplicable, incluyendo la normativa que concierne al uso de la inteligencia artificial.

El propietario o creador de los modelos de ningún modo será responsable de los resultados derivados del uso que las terceras partes hagan de estos modelos.

## Training and evaluation data

The data used for fine-tuning are:

1) The [Negation and Uncertainty in Spanish Corpus (NUBes)](https://github.com/Vicomtech/NUBes-negation-uncertainty-biomedical-corpus):
It is a collection of 29,682 sentences (518,068 tokens) from anonymised health records in Spanish, annotated with negation and uncertainty cues and their scopes.

2) The [Clinical Trials for Evidence-Based-Medicine in Spanish corpus](http://www.lllf.uam.es/ESP/nlpdata/wp2/):
It is a collection of 1,200 texts about clinical trial studies and clinical trial announcements:
- 500 abstracts from journals published under a Creative Commons license, e.g. available in PubMed or the Scientific Electronic Library Online (SciELO)
- 700 clinical trial announcements published in the European Clinical Trials Register and the Repositorio Español de Estudios Clínicos

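Both corpora annotate negation/speculation cues and the material in their scope, which maps onto the four classes listed at the top of this card. Purely for illustration, and assuming an IOB2 encoding (the concrete file format distributed with each corpus may differ), the card's own examples could be represented as follows:

```python
# Illustrative only: a possible IOB2 encoding of the examples given above
# ("sin dolor", "posiblemente sobreviva"). The actual distribution format
# of NUBes and CT-EBM-ES may differ.
examples = [
    [("sin", "B-Neg_cue"), ("dolor", "B-Negated")],
    [("posiblemente", "B-Spec_cue"), ("sobreviva", "B-Speculated")],
]
```
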
If you use the CT-EBM-ES resource, please cite it as follows:

```
@article{campillosetal-midm2021,
  title     = {A clinical trials corpus annotated with UMLS© entities to enhance the access to Evidence-Based Medicine},
  author    = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n and Moreno-Sandoval, Antonio},
  journal   = {BMC Medical Informatics and Decision Making},
  volume    = {21},
  number    = {1},
  pages     = {1--19},
  year      = {2021},
  publisher = {BioMed Central}
}
```

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding `Trainer` setup is given after the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: we used different seeds for 5 evaluation rounds, and uploaded the model with the best results
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

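As a rough guide, the hyperparameters above correspond to a `transformers` `Trainer` configuration along the following lines. This is a sketch, not the authors' actual training script: the label set, the dataset objects and the data collator wiring are placeholders.

```python
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

# Assumed IOB2 label set derived from the four classes described above.
labels = ["O",
          "B-Neg_cue", "I-Neg_cue", "B-Negated", "I-Negated",
          "B-Spec_cue", "I-Spec_cue", "B-Speculated", "I-Speculated"]

base = "PlanTL-GOB-ES/bsc-bio-ehr-es"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForTokenClassification.from_pretrained(base, num_labels=len(labels))

args = TrainingArguments(
    output_dir="roberta-es-clinical-trials-neg-spec",
    learning_rate=2e-5,               # learning_rate
    per_device_train_batch_size=16,   # train_batch_size
    per_device_eval_batch_size=16,    # eval_batch_size
    num_train_epochs=4,               # num_epochs
    lr_scheduler_type="linear",       # lr_scheduler_type (default Adam betas/epsilon kept)
    seed=42,                          # one seed per evaluation round
)

# train_ds and dev_ds stand for tokenized, label-aligned datasets (not shown here).
# trainer = Trainer(
#     model=model,
#     args=args,
#     train_dataset=train_ds,
#     eval_dataset=dev_ds,
#     data_collator=DataCollatorForTokenClassification(tokenizer),
#     tokenizer=tokenizer,
# )
# trainer.train()
```
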
### Training results (test set; average and standard deviation of 5 rounds with different seeds)

| Precision | Recall | F1 | Accuracy |
|:--------------:|:--------------:|:--------------:|:--------------:|
| 0.833 (±0.001) | 0.870 (±0.001) | 0.851 (±0.001) | 0.986 (±0.001) |

**Results per class (test set; average and standard deviation of 5 rounds with different seeds)**

| Class | Precision | Recall | F1 | Support |
|:-----------:|:--------------:|:--------------:|:--------------:|:---------:|
| Neg_cue | 0.944 (±0.001) | 0.963 (±0.002) | 0.954 (±0.001) | 2416 |
| Negated | 0.805 (±0.003) | 0.843 (±0.005) | 0.823 (±0.003) | 3064 |
| Spec_cue | 0.800 (±0.005) | 0.862 (±0.006) | 0.830 (±0.002) | 746 |
| Speculated | 0.683 (±0.009) | 0.735 (±0.010) | 0.708 (±0.009) | 993 |

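The entity-level scores above are averaged over the five evaluation rounds. As an illustration (not the authors' evaluation script), per-run entity-level precision, recall and F1 could be computed with `seqeval` and then aggregated as a mean and standard deviation, roughly as follows; the gold and predicted tag sequences shown here are placeholders:

```python
from statistics import mean, stdev

from seqeval.metrics import f1_score, precision_score, recall_score


def score_run(y_true, y_pred):
    """Entity-level scores for one evaluation round (lists of IOB2 tag sequences)."""
    return {
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
    }


# Placeholder gold/predicted tags; in practice there is one entry per seed.
gold = [["B-Neg_cue", "B-Negated", "I-Negated", "O"]]
runs = [
    score_run(gold, [["B-Neg_cue", "B-Negated", "I-Negated", "O"]]),
    score_run(gold, [["B-Neg_cue", "B-Negated", "O", "O"]]),
]

for metric in ("precision", "recall", "f1"):
    values = [run[metric] for run in runs]
    print(f"{metric}: {mean(values):.3f} (±{stdev(values):.3f})")
```
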
### Framework versions

- Transformers 4.17.0
- PyTorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6