alex-miller committed · verified · commit b9ee55b · 1 parent: d1263dc

End of training

Files changed (1): README.md (+26 −16)
README.md CHANGED

@@ -16,7 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [alex-miller/ODABert](https://huggingface.co/alex-miller/ODABert) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.0271
+- Loss: 0.0272
 
 ## Model description
 
@@ -35,33 +35,43 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 8e-07
-- train_batch_size: 4
-- eval_batch_size: 4
+- learning_rate: 2e-06
+- train_batch_size: 24
+- eval_batch_size: 24
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 10
+- num_epochs: 20
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 0.0435        | 1.0   | 754  | 0.0345          |
-| 0.0363        | 2.0   | 1508 | 0.0303          |
-| 0.0307        | 3.0   | 2262 | 0.0281          |
-| 0.0276        | 4.0   | 3016 | 0.0276          |
-| 0.0259        | 5.0   | 3770 | 0.0274          |
-| 0.0244        | 6.0   | 4524 | 0.0268          |
-| 0.0233        | 7.0   | 5278 | 0.0275          |
-| 0.0228        | 8.0   | 6032 | 0.0270          |
-| 0.0221        | 9.0   | 6786 | 0.0272          |
-| 0.0222        | 10.0  | 7540 | 0.0271          |
+| 0.0614        | 1.0   | 75   | 0.0637          |
+| 0.053         | 2.0   | 150  | 0.0520          |
+| 0.0409        | 3.0   | 225  | 0.0377          |
+| 0.0319        | 4.0   | 300  | 0.0333          |
+| 0.028         | 5.0   | 375  | 0.0313          |
+| 0.0261        | 6.0   | 450  | 0.0303          |
+| 0.0243        | 7.0   | 525  | 0.0296          |
+| 0.0231        | 8.0   | 600  | 0.0293          |
+| 0.0217        | 9.0   | 675  | 0.0288          |
+| 0.0214        | 10.0  | 750  | 0.0282          |
+| 0.0205        | 11.0  | 825  | 0.0280          |
+| 0.02          | 12.0  | 900  | 0.0279          |
+| 0.019         | 13.0  | 975  | 0.0277          |
+| 0.0185        | 14.0  | 1050 | 0.0276          |
+| 0.0182        | 15.0  | 1125 | 0.0276          |
+| 0.0179        | 16.0  | 1200 | 0.0274          |
+| 0.0176        | 17.0  | 1275 | 0.0274          |
+| 0.0175        | 18.0  | 1350 | 0.0273          |
+| 0.0174        | 19.0  | 1425 | 0.0272          |
+| 0.0172        | 20.0  | 1500 | 0.0272          |
 
 
 ### Framework versions
 
 - Transformers 4.44.2
 - Pytorch 2.4.1+cu121
-- Datasets 2.21.0
+- Datasets 3.0.1
 - Tokenizers 0.19.1
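A quick sanity check one can derive from the step counts in the diff: the new run logs 75 optimizer steps per epoch at train_batch_size 24, implying roughly 1,800 training examples, while the previous run's 754 steps at batch size 4 imply roughly 3,016. This sketch assumes one optimizer step per batch and no gradient accumulation, neither of which the card states:

```python
def approx_train_examples(steps_per_epoch: int, batch_size: int) -> int:
    # Upper bound on the training-set size: the final batch of an epoch
    # may be partial, so the true count lies in
    # ((steps_per_epoch - 1) * batch_size, steps_per_epoch * batch_size].
    return steps_per_epoch * batch_size

previous_run = approx_train_examples(754, 4)   # old hyperparameters
current_run = approx_train_examples(75, 24)    # this commit's hyperparameters
print(previous_run, current_run)  # 3016 1800
```

The mismatch (≈3,016 vs ≈1,800 examples) suggests the training split changed between runs, not just the batch size, which is consistent with the larger hyperparameter revision (learning rate 8e-07 → 2e-06, epochs 10 → 20) in this commit.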