ljnlonoljpiljm committed
Commit
0bb6265
1 Parent(s): 578d3ff

End of training

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -37,12 +37,12 @@ The following hyperparameters were used during training:
 - train_batch_size: 16
 - eval_batch_size: 8
 - seed: 42
-- gradient_accumulation_steps: 32
-- total_train_batch_size: 512
+- gradient_accumulation_steps: 8
+- total_train_batch_size: 128
 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
-- num_epochs: 10
+- num_epochs: 100
 - mixed_precision_training: Native AMP
 
 ### Training results
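The changed `total_train_batch_size` is consistent with the other values in the diff: the effective batch size is the per-device batch size multiplied by the gradient accumulation steps. A minimal sketch of that check, assuming single-device training (no device count appears in the config), with variable names mirroring the README fields:

```python
# Hyperparameters as they appear in the updated README.
train_batch_size = 16            # per-device batch size
gradient_accumulation_steps = 8  # changed from 32 in this commit

# Effective (total) train batch size under the single-device assumption.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 128, matching the diff's total_train_batch_size
```

The same arithmetic explains the old values: 16 × 32 = 512, so halving the effective batch size four-fold is done here purely through the accumulation steps.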