parambharat committed
Commit 40e9491 · Parent(s): 5d2c21c

update model card README.md

Files changed (1): README.md (+11 -13)
README.md CHANGED
@@ -13,12 +13,12 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model was trained from scratch on the None dataset.
 It achieves the following results on the evaluation set:
-- eval_loss: 0.1632
-- eval_wer: 39.0511
-- eval_runtime: 307.7229
-- eval_samples_per_second: 0.325
-- eval_steps_per_second: 0.029
-- epoch: 3.02
+- eval_loss: 0.1630
+- eval_wer: 35.4015
+- eval_runtime: 151.2981
+- eval_samples_per_second: 0.661
+- eval_steps_per_second: 0.026
+- epoch: 4.03
 - step: 500
 
 ## Model description
@@ -39,20 +39,18 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 1e-05
-- train_batch_size: 24
-- eval_batch_size: 12
+- train_batch_size: 64
+- eval_batch_size: 32
 - seed: 42
-- gradient_accumulation_steps: 2
-- total_train_batch_size: 48
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
-- training_steps: 5000
+- training_steps: 3000
 - mixed_precision_training: Native AMP
 
 ### Framework versions
 
 - Transformers 4.26.0.dev0
-- Pytorch 1.11.0
+- Pytorch 1.13.0+cu117
 - Datasets 2.7.1.dev0
-- Tokenizers 0.12.1
+- Tokenizers 0.13.2
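The updated card specifies a `linear` scheduler with 500 warmup steps over 3000 training steps at a base learning rate of 1e-05. A minimal sketch of what that schedule implies per step (a hand-rolled illustration of linear warmup plus linear decay, not the card's actual training code):

```python
def linear_warmup_lr(step, base_lr=1e-05, warmup_steps=500, total_steps=3000):
    """Learning rate at `step` under linear warmup followed by linear decay to 0.

    Mirrors the shape of the `linear` lr_scheduler_type named in the model card:
    ramps from 0 to base_lr over warmup_steps, then decays linearly to 0 at total_steps.
    """
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    # linear decay from base_lr at warmup_steps down to 0 at total_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

For example, at step 500 the rate peaks at 1e-05, and at step 1750 (the midpoint of the decay phase) it is 5e-06.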