namngo committed on
Commit
f4e9198
1 Parent(s): 910fbff

update model card README.md

Files changed (1)
  1. README.md +13 -18
README.md CHANGED
@@ -14,7 +14,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [manhtt-079/vipubmed-deberta-xsmall](https://huggingface.co/manhtt-079/vipubmed-deberta-xsmall) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 3.6448
+- Loss: 3.0421
 
 ## Model description
 
@@ -33,34 +33,29 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 5e-05
+- learning_rate: 7e-05
 - train_batch_size: 16
 - eval_batch_size: 16
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.05
-- num_epochs: 15
+- num_epochs: 10
 
 ### Training results
 
 | Training Loss | Epoch | Step  | Validation Loss |
 |:-------------:|:-----:|:-----:|:---------------:|
-| 2.1442        | 1.0   | 1622  | 1.9280          |
-| 1.5136        | 2.0   | 3244  | 1.5406          |
-| 1.1747        | 3.0   | 4866  | 1.5945          |
-| 0.9242        | 4.0   | 6488  | 1.6185          |
-| 0.7439        | 5.0   | 8110  | 1.7099          |
-| 0.5699        | 6.0   | 9732  | 1.8345          |
-| 0.4549        | 7.0   | 11354 | 2.0935          |
-| 0.3667        | 8.0   | 12976 | 2.2295          |
-| 0.2881        | 9.0   | 14598 | 2.4917          |
-| 0.2398        | 10.0  | 16220 | 2.7296          |
-| 0.1911        | 11.0  | 17842 | 2.9421          |
-| 0.1539        | 12.0  | 19464 | 3.1193          |
-| 0.1273        | 13.0  | 21086 | 3.3655          |
-| 0.1147        | 14.0  | 22708 | 3.4853          |
-| 0.0957        | 15.0  | 24330 | 3.6448          |
+| 3.6396        | 1.0   | 1462  | 2.7284          |
+| 2.4335        | 2.0   | 2924  | 2.4606          |
+| 1.862         | 3.0   | 4386  | 2.1815          |
+| 1.5105        | 4.0   | 5848  | 2.1607          |
+| 1.2232        | 5.0   | 7310  | 2.2822          |
+| 1.0172        | 6.0   | 8772  | 2.3642          |
+| 0.8287        | 7.0   | 10234 | 2.5847          |
+| 0.6869        | 8.0   | 11696 | 2.7425          |
+| 0.5767        | 9.0   | 13158 | 2.9143          |
+| 0.4978        | 10.0  | 14620 | 3.0421          |
 
 
 ### Framework versions
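
For anyone reproducing this run, the updated hyperparameters map fairly directly onto a `transformers` `TrainingArguments` configuration. The sketch below is an assumption-laden reconstruction, not the author's actual script: the output directory, the per-epoch evaluation/logging strategies, and the task head are not stated in the card and are guesses.

```python
# Minimal sketch of a training configuration matching the hyperparameters
# listed in this commit. Dataset, model head, and output_dir are assumptions;
# only the values commented as "from the card" come from the README diff.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vipubmed-deberta-xsmall-finetuned",  # hypothetical path
    learning_rate=7e-5,                 # from the card (updated from 5e-05)
    per_device_train_batch_size=16,     # from the card
    per_device_eval_batch_size=16,      # from the card
    seed=42,                            # from the card
    adam_beta1=0.9,                     # from the card: Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                  # from the card: epsilon=1e-08
    lr_scheduler_type="linear",         # from the card
    warmup_ratio=0.05,                  # from the card
    num_train_epochs=10,                # from the card (updated from 15)
    evaluation_strategy="epoch",        # assumption: card reports per-epoch validation loss
    logging_strategy="epoch",           # assumption, matches the per-epoch training loss column
)
```

The per-epoch evaluation strategy is inferred from the training-results table, which logs one training/validation loss pair per epoch; the rising validation loss after epoch 4 in the new run suggests the best checkpoint is not the final one.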