evanarlian committed on
Commit 8632566 · 1 Parent(s): 8b8c317

update model card README.md

Files changed (1):
  1. README.md +34 -50
README.md CHANGED
@@ -17,7 +17,7 @@ model-index:
  metrics:
  - name: Wer
    type: wer
- value: 0.3199428097039019
+ value: 0.2990499031454663
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -25,10 +25,10 @@ should probably proofread and complete it, then remove this comment. -->

  # wav2vec2-xls-r-164m-id

- This model is a fine-tuned version of [evanarlian/distil-wav2vec2-xls-r-164m-id](https://huggingface.co/evanarlian/distil-wav2vec2-xls-r-164m-id) on the evanarlian/common_voice_11_0_id_filtered dataset.
+ This model is a fine-tuned version of [evanarlian/wav2vec2-xls-r-164m-id](https://huggingface.co/evanarlian/wav2vec2-xls-r-164m-id) on the evanarlian/common_voice_11_0_id_filtered dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.3215
- - Wer: 0.3199
+ - Loss: 0.3510
+ - Wer: 0.2990

  ## Model description
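The checkpoint documented here, evanarlian/wav2vec2-xls-r-164m-id, can be loaded for Indonesian speech recognition with the standard `transformers` ASR pipeline. The commit itself ships no usage code, so the snippet below is only a minimal sketch; the audio path is a made-up placeholder.

```python
from transformers import pipeline

# Load the checkpoint named in this model card (wav2vec2 CTC, Indonesian).
asr = pipeline(
    "automatic-speech-recognition",
    model="evanarlian/wav2vec2-xls-r-164m-id",
)

# "sample_id.wav" is a placeholder; any Indonesian speech clip works,
# and the pipeline resamples it to the 16 kHz rate the model expects.
result = asr("sample_id.wav")
print(result["text"])
```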
 
@@ -47,63 +47,47 @@ More information needed
  ### Training hyperparameters

  The following hyperparameters were used during training:
- - learning_rate: 0.0002
+ - learning_rate: 5e-05
  - train_batch_size: 24
  - eval_batch_size: 24
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
- - lr_scheduler_warmup_ratio: 0.3
- - num_epochs: 40.0
+ - lr_scheduler_warmup_ratio: 0.2
+ - num_epochs: 50.0
  - mixed_precision_training: Native AMP

  ### Training results

  | Training Loss | Epoch | Step | Validation Loss | Wer |
  |:-------------:|:-----:|:-----:|:---------------:|:------:|
- | 3.5445 | 0.92 | 1000 | 3.0106 | 1.0000 |
- | 2.5067 | 1.84 | 2000 | 1.6134 | 0.9905 |
- | 1.0279 | 2.75 | 3000 | 0.7667 | 0.8217 |
- | 0.7823 | 3.67 | 4000 | 0.6141 | 0.7224 |
- | 0.6504 | 4.59 | 5000 | 0.5228 | 0.6503 |
- | 0.5687 | 5.51 | 6000 | 0.4666 | 0.5963 |
- | 0.5026 | 6.43 | 7000 | 0.4288 | 0.5612 |
- | 0.4584 | 7.35 | 8000 | 0.4048 | 0.5267 |
- | 0.4193 | 8.26 | 9000 | 0.4057 | 0.5218 |
- | 0.3931 | 9.18 | 10000 | 0.3820 | 0.4813 |
- | 0.3651 | 10.1 | 11000 | 0.3686 | 0.4709 |
- | 0.3526 | 11.02 | 12000 | 0.3665 | 0.4655 |
- | 0.3333 | 11.94 | 13000 | 0.3440 | 0.4485 |
- | 0.3095 | 12.86 | 14000 | 0.3314 | 0.4331 |
- | 0.2802 | 13.77 | 15000 | 0.3360 | 0.4157 |
- | 0.2724 | 14.69 | 16000 | 0.3331 | 0.4107 |
- | 0.2488 | 15.61 | 17000 | 0.3255 | 0.4037 |
- | 0.231 | 16.53 | 18000 | 0.3089 | 0.3950 |
- | 0.2146 | 17.45 | 19000 | 0.3398 | 0.3990 |
- | 0.2103 | 18.37 | 20000 | 0.3080 | 0.3805 |
- | 0.2035 | 19.28 | 21000 | 0.3158 | 0.3828 |
- | 0.1933 | 20.2 | 22000 | 0.3118 | 0.3728 |
- | 0.1839 | 21.12 | 23000 | 0.3076 | 0.3690 |
- | 0.1791 | 22.04 | 24000 | 0.3041 | 0.3658 |
- | 0.1696 | 22.96 | 25000 | 0.3092 | 0.3603 |
- | 0.1608 | 23.88 | 26000 | 0.2936 | 0.3555 |
- | 0.1568 | 24.79 | 27000 | 0.2936 | 0.3560 |
- | 0.1456 | 25.71 | 28000 | 0.3257 | 0.3543 |
- | 0.1399 | 26.63 | 29000 | 0.3100 | 0.3424 |
- | 0.1345 | 27.55 | 30000 | 0.3172 | 0.3472 |
- | 0.1264 | 28.47 | 31000 | 0.3276 | 0.3412 |
- | 0.1289 | 29.38 | 32000 | 0.3104 | 0.3401 |
- | 0.1246 | 30.3 | 33000 | 0.3204 | 0.3352 |
- | 0.1156 | 31.22 | 34000 | 0.3013 | 0.3353 |
- | 0.1143 | 32.14 | 35000 | 0.3102 | 0.3322 |
- | 0.1152 | 33.06 | 36000 | 0.3240 | 0.3323 |
- | 0.1093 | 33.98 | 37000 | 0.3105 | 0.3295 |
- | 0.101 | 34.89 | 38000 | 0.3112 | 0.3263 |
- | 0.1017 | 35.81 | 39000 | 0.3263 | 0.3239 |
- | 0.0915 | 36.73 | 40000 | 0.3176 | 0.3226 |
- | 0.0943 | 37.65 | 41000 | 0.3141 | 0.3210 |
- | 0.0898 | 38.57 | 42000 | 0.3177 | 0.3183 |
- | 0.0923 | 39.49 | 43000 | 0.3215 | 0.3199 |
+ | 0.089 | 1.84 | 2000 | 0.3205 | 0.3168 |
+ | 0.0882 | 3.67 | 4000 | 0.3243 | 0.3203 |
+ | 0.0868 | 5.51 | 6000 | 0.3272 | 0.3183 |
+ | 0.0926 | 7.35 | 8000 | 0.3365 | 0.3209 |
+ | 0.0943 | 9.18 | 10000 | 0.3400 | 0.3221 |
+ | 0.0979 | 11.02 | 12000 | 0.3269 | 0.3192 |
+ | 0.09 | 12.86 | 14000 | 0.3384 | 0.3164 |
+ | 0.0877 | 14.69 | 16000 | 0.3284 | 0.3183 |
+ | 0.0808 | 16.53 | 18000 | 0.3366 | 0.3189 |
+ | 0.0835 | 18.37 | 20000 | 0.3306 | 0.3156 |
+ | 0.08 | 20.2 | 22000 | 0.3384 | 0.3133 |
+ | 0.0806 | 22.04 | 24000 | 0.3307 | 0.3109 |
+ | 0.0749 | 23.88 | 26000 | 0.3493 | 0.3118 |
+ | 0.073 | 25.71 | 28000 | 0.3479 | 0.3088 |
+ | 0.0754 | 27.55 | 30000 | 0.3482 | 0.3109 |
+ | 0.0697 | 29.38 | 32000 | 0.3515 | 0.3090 |
+ | 0.07 | 31.22 | 34000 | 0.3532 | 0.3101 |
+ | 0.0672 | 33.06 | 36000 | 0.3668 | 0.3086 |
+ | 0.0713 | 34.89 | 38000 | 0.3560 | 0.3048 |
+ | 0.0637 | 36.73 | 40000 | 0.3522 | 0.3028 |
+ | 0.0695 | 38.57 | 42000 | 0.3407 | 0.3014 |
+ | 0.0657 | 40.4 | 44000 | 0.3456 | 0.3025 |
+ | 0.0598 | 42.24 | 46000 | 0.3498 | 0.3013 |
+ | 0.059 | 44.08 | 48000 | 0.3563 | 0.3012 |
+ | 0.0645 | 45.91 | 50000 | 0.3514 | 0.3002 |
+ | 0.0595 | 47.75 | 52000 | 0.3545 | 0.3000 |
+ | 0.064 | 49.59 | 54000 | 0.3510 | 0.2990 |


  ### Framework versions
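For readers who want to reproduce the new run, the hyperparameters listed in the hunk above map almost one-to-one onto `transformers` `TrainingArguments`. The training script is not part of this commit, so the following is only a hypothetical sketch of the updated configuration (the output directory is a placeholder, and the Adam betas and epsilon quoted in the card are the library defaults):

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters added in this commit;
# the real training script is not included in the diff.
training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-164m-id",  # placeholder path
    learning_rate=5e-5,                   # lowered from 2e-4 in the previous run
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.2,                     # was 0.3
    num_train_epochs=50.0,                # was 40.0
    fp16=True,                            # "Native AMP" mixed-precision training
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the defaults, so not set here.
)
```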
 
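Both the old (0.3199) and new (0.2990) headline numbers are word error rates on the evaluation split of evanarlian/common_voice_11_0_id_filtered. The commit does not include an evaluation snippet; the sketch below only illustrates how such a score is typically computed with the `evaluate` library, using invented transcript pairs to show the call.

```python
import evaluate

# WER = (substitutions + deletions + insertions) / number of reference words.
wer_metric = evaluate.load("wer")

# Invented Indonesian transcripts, for illustration only; the 0.2990 above
# comes from real model predictions on the held-out split.
references = ["selamat pagi semuanya", "terima kasih banyak"]
predictions = ["selamat pagi semua", "terima kasih banyak"]

score = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {score:.4f}")
```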