sofia-todeschini committed on
Commit e7fca1a
Parent(s): 616f895

update model card README.md

Files changed (1): README.md added (+79 lines)
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: BioBERT-LitCovid-1.4
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# BioBERT-LitCovid-1.4

This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on an unknown dataset (presumably the LitCovid topic-annotated corpus, given the model name).
It achieves the following results on the evaluation set (a brief inference sketch follows the list):
- Loss: 0.5756
- Hamming loss: 0.0802
- F1 micro: 0.6160
- F1 macro: 0.4740
- F1 weighted: 0.6962
- F1 samples: 0.6217
- Precision micro: 0.4710
- Precision macro: 0.3578
- Precision weighted: 0.6089
- Precision samples: 0.5156
- Recall micro: 0.8901
- Recall macro: 0.8404
- Recall weighted: 0.8901
- Recall samples: 0.9055
- ROC AUC: 0.9061
- Accuracy: 0.0775

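The metric profile (per-label scores plus samples-averaged F1 and a low subset accuracy) indicates multi-label classification. Below is a minimal inference sketch; the Hub repo id is a guess from the committer and model name, and the 0.5 sigmoid threshold is an assumption, since the card states neither:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical repo id inferred from the committer and model name; adjust as needed.
model_id = "sofia-todeschini/BioBERT-LitCovid-1.4"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "Efficacy of mRNA vaccination against SARS-CoV-2 variants in older adults."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label decoding: an independent sigmoid per label, thresholded at 0.5 (assumed).
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs.tolist()) if p > 0.5]
print(predicted)
```
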
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP

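These values map one-to-one onto `transformers.TrainingArguments`. A reproduction sketch under that assumption (the `output_dir` name is a placeholder, and `fp16=True` stands in for "Native AMP"; dataset loading and `Trainer` wiring are omitted):

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="BioBERT-LitCovid-1.4",
    learning_rate=2e-5,
    per_device_train_batch_size=16,   # train_batch_size: 16
    per_device_eval_batch_size=16,    # eval_batch_size: 16
    seed=42,
    gradient_accumulation_steps=2,    # effective train batch size: 16 * 2 = 32
    num_train_epochs=5,
    lr_scheduler_type="linear",
    adam_beta1=0.9,                   # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,                        # Native AMP mixed precision
)
```
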
### Training results

| Training Loss | Epoch | Step | Validation Loss | Hamming loss | F1 micro | F1 macro | F1 weighted | F1 samples | Precision micro | Precision macro | Precision weighted | Precision samples | Recall micro | Recall macro | Recall weighted | Recall samples | ROC AUC | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------:|:--------:|:--------:|:-----------:|:----------:|:---------------:|:---------------:|:------------------:|:-----------------:|:------------:|:------------:|:---------------:|:--------------:|:-------:|:--------:|
| 0.6673 | 1.0 | 1151 | 0.6365 | 0.1262 | 0.5023 | 0.3822 | 0.6341 | 0.5084 | 0.3513 | 0.2799 | 0.5428 | 0.3829 | 0.8808 | 0.8538 | 0.8808 | 0.8981 | 0.8770 | 0.0088 |
| 0.5371 | 2.0 | 2303 | 0.5721 | 0.1080 | 0.5442 | 0.4060 | 0.6607 | 0.5578 | 0.3916 | 0.2993 | 0.5701 | 0.4391 | 0.8917 | 0.8644 | 0.8917 | 0.9074 | 0.8919 | 0.0365 |
| 0.4628 | 3.0 | 3454 | 0.5620 | 0.0940 | 0.5780 | 0.4370 | 0.6776 | 0.5874 | 0.4280 | 0.3248 | 0.5909 | 0.4739 | 0.8899 | 0.8572 | 0.8899 | 0.9054 | 0.8986 | 0.0510 |
| 0.3925 | 4.0 | 4606 | 0.5744 | 0.0796 | 0.6160 | 0.4742 | 0.6960 | 0.6208 | 0.4728 | 0.3591 | 0.6113 | 0.5160 | 0.8837 | 0.8377 | 0.8837 | 0.9004 | 0.9035 | 0.0752 |
| 0.3647 | 5.0 | 5755 | 0.5756 | 0.0802 | 0.6160 | 0.4740 | 0.6962 | 0.6217 | 0.4710 | 0.3578 | 0.6089 | 0.5156 | 0.8901 | 0.8404 | 0.8901 | 0.9055 | 0.9061 | 0.0775 |

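The metric suite above (Hamming loss; micro-, macro-, weighted-, and samples-averaged F1, precision, and recall; ROC AUC; subset accuracy) matches what scikit-learn computes for multi-label targets. A sketch of a `compute_metrics` function that would reproduce these columns, again assuming sigmoid outputs thresholded at 0.5 (the actual threshold is not stated in the card):

```python
import numpy as np
from sklearn.metrics import (
    accuracy_score, f1_score, hamming_loss,
    precision_score, recall_score, roc_auc_score,
)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    probs = 1 / (1 + np.exp(-logits))   # sigmoid over raw logits
    preds = (probs > 0.5).astype(int)   # assumed 0.5 decision threshold

    metrics = {"hamming_loss": hamming_loss(labels, preds)}
    for avg in ("micro", "macro", "weighted", "samples"):
        metrics[f"f1_{avg}"] = f1_score(labels, preds, average=avg, zero_division=0)
        metrics[f"precision_{avg}"] = precision_score(labels, preds, average=avg, zero_division=0)
        metrics[f"recall_{avg}"] = recall_score(labels, preds, average=avg, zero_division=0)
    metrics["roc_auc"] = roc_auc_score(labels, probs, average="micro")
    # "Accuracy" for multi-label targets is subset accuracy:
    # a sample counts only if every label matches, hence the low 0.0775.
    metrics["accuracy"] = accuracy_score(labels, preds)
    return metrics
```

The very low subset accuracy next to strong micro-F1 is expected under this metric: getting one of several labels wrong on a sample zeroes its contribution.
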
### Framework versions

- Transformers 4.28.0
- PyTorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.13.3