Shamik committed on
Commit
0e79e65
1 Parent(s): c0b7027

End of training

Files changed (2)
  1. README.md +83 -0
  2. generation_config.json +195 -0
README.md ADDED
@@ -0,0 +1,83 @@
---
language:
- en
license: mit
base_model: distil-whisper/distil-small.en
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: Distil Whisper Small finetuned on PolyAI Minds14 English US.
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Speech Transcription in English from e-banking domain.
      type: PolyAI/minds14
      config: en-US
      split: train
      args: en-US
    metrics:
    - name: Wer
      type: wer
      value: 0.3318442884492661
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Distil Whisper Small finetuned on PolyAI Minds14 English US.

This model is a fine-tuned version of [distil-whisper/distil-small.en](https://huggingface.co/distil-whisper/distil-small.en) on the PolyAI/minds14 (en-US) dataset ("Speech Transcription in English from e-banking domain").
It achieves the following results on the evaluation set:
- Loss: 1.0182
- Wer Ortho: 0.3371
- Wer: 0.3318

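Since the card does not yet include an inference snippet, here is a minimal usage sketch with the 🤗 Transformers `pipeline` API. The repository id and the audio file name are placeholders, not values taken from this card.

```python
# Minimal inference sketch; the repo id and audio path below are placeholders.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="<your-namespace>/distil-whisper-small-minds14-enUS",  # hypothetical repo id
)

# Transcribe a local audio file (any format ffmpeg can decode).
result = asr("sample.wav")
print(result["text"])
```
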
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 400
- mixed_precision_training: Native AMP

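For orientation, the settings above map roughly onto `Seq2SeqTrainingArguments` as in the sketch below; the output directory and the evaluation cadence are assumptions, not values reported by this card.

```python
# Rough Seq2SeqTrainingArguments equivalent of the reported hyperparameters.
# output_dir, eval cadence and predict_with_generate are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="distil-whisper-small-minds14-enUS",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=50,
    max_steps=400,
    fp16=True,                    # "Native AMP" mixed precision
    evaluation_strategy="steps",  # assumed; results are reported every 100 steps
    eval_steps=100,               # assumed
    predict_with_generate=True,   # assumed; needed to compute WER during eval
)
```
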
### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.2325        | 3.57  | 100  | 0.6222          | 0.3557    | 0.3472 |
| 0.0196        | 7.14  | 200  | 0.8475          | 0.3757    | 0.3689 |
| 0.0014        | 10.71 | 300  | 0.9729          | 0.3630    | 0.3555 |
| 0.0006        | 14.29 | 400  | 1.0182          | 0.3371    | 0.3318 |

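For reference, word error rate scores like those above can be computed with the `evaluate` library, as in the sketch below (the example strings are illustrative, not taken from the dataset).

```python
# Sketch: computing word error rate with the evaluate library.
# The prediction/reference strings are illustrative, not from MINDS-14.
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["i would like to check my account balance"]
references = ["i would like to check my bank account balance"]
print(wer_metric.compute(predictions=predictions, references=references))  # ~0.11
```
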
### Framework versions

- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0

generation_config.json ADDED
@@ -0,0 +1,195 @@
{
  "alignment_heads": [
    [6, 6], [7, 0], [7, 3], [7, 8], [8, 2], [8, 5], [8, 7],
    [9, 0], [9, 4], [9, 8], [9, 10], [10, 0], [10, 1], [10, 2],
    [10, 3], [10, 6], [10, 11], [11, 2], [11, 4]
  ],
  "begin_suppress_tokens": [220, 50256],
  "bos_token_id": 50257,
  "decoder_start_token_id": 50257,
  "eos_token_id": 50256,
  "forced_decoder_ids": [[1, 50362]],
  "is_multilingual": false,
  "language": "<|en|>",
  "max_initial_timestamp_index": 1,
  "max_length": 448,
  "no_timestamps_token_id": 50362,
  "pad_token_id": 50256,
  "return_timestamps": false,
  "suppress_tokens": [
    1, 2, 7, 8, 9, 10, 14, 25, 26, 27, 28, 29, 31, 58, 59, 60, 61, 62, 63,
    90, 91, 92, 93, 357, 366, 438, 532, 685, 705, 796, 930, 1058, 1220,
    1267, 1279, 1303, 1343, 1377, 1391, 1635, 1782, 1875, 2162, 2361, 2488,
    3467, 4008, 4211, 4600, 4808, 5299, 5855, 6329, 7203, 9609, 9959,
    10563, 10786, 11420, 11709, 11907, 13163, 13697, 13700, 14808, 15306,
    16410, 16791, 17992, 19203, 19510, 20724, 22305, 22935, 27007, 30109,
    30420, 33409, 34949, 40283, 40493, 40549, 47282, 49146, 50257, 50357,
    50358, 50359, 50360, 50361
  ],
  "task": "transcribe",
  "transformers_version": "4.36.0",
  "use_scan": false
}
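`generate` picks this configuration up automatically when the checkpoint is loaded; a rough sketch of inspecting and reusing it explicitly is below (the repo id is a placeholder, and the audio preprocessing is abbreviated to comments).

```python
# Sketch: loading and inspecting the generation config shown above.
# The repo id is a placeholder for wherever this checkpoint is hosted.
from transformers import GenerationConfig, WhisperForConditionalGeneration, WhisperProcessor

repo_id = "<your-namespace>/distil-whisper-small-minds14-enUS"  # hypothetical
processor = WhisperProcessor.from_pretrained(repo_id)
model = WhisperForConditionalGeneration.from_pretrained(repo_id)

gen_config = GenerationConfig.from_pretrained(repo_id)
print(gen_config.max_length, gen_config.task)  # 448 transcribe

# inputs = processor(audio_array, sampling_rate=16000, return_tensors="pt")
# predicted_ids = model.generate(inputs.input_features, generation_config=gen_config)
# print(processor.batch_decode(predicted_ids, skip_special_tokens=True))
```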