jinunyachhyon committed
Commit 34b2bd1 · verified · 1 parent: c7526c6

End of training

Files changed (2)
  1. README.md +66 -0
  2. generation_config.json +149 -0
README.md ADDED
@@ -0,0 +1,66 @@
+ ---
+ library_name: transformers
+ license: apache-2.0
+ base_model: openai/whisper-tiny.en
+ tags:
+ - generated_from_trainer
+ metrics:
+ - wer
+ model-index:
+ - name: whisper-tiny-eng-transliterated-nep
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # whisper-tiny-eng-transliterated-nep
+
+ This model is a fine-tuned version of [openai/whisper-tiny.en](https://huggingface.co/openai/whisper-tiny.en) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.7625
+ - Wer: 48.5073
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 4
+ - eval_batch_size: 4
+ - seed: 42
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 16
+ - optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 200
+ - training_steps: 500
+ - mixed_precision_training: Native AMP
+
+ ### Training results
+
+ | Training Loss | Epoch  | Step | Validation Loss | Wer     |
+ |:-------------:|:------:|:----:|:---------------:|:-------:|
+ | 1.9083        | 3.0121 | 250  | 0.8480          | 55.6012 |
+ | 0.248         | 6.0242 | 500  | 0.7625          | 48.5073 |
+
+
+ ### Framework versions
+
+ - Transformers 4.47.1
+ - Pytorch 2.4.1+cu121
+ - Datasets 3.2.0
+ - Tokenizers 0.21.0
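
The card above does not yet include a usage snippet. Below is a minimal inference sketch: the repo id `jinunyachhyon/whisper-tiny-eng-transliterated-nep` is an assumption inferred from the committer and model name, and `sample.wav` is a placeholder for a local audio file.

```python
# Minimal ASR sketch with the transformers pipeline.
# Assumptions: the repo id is inferred from the committer and model name,
# and sample.wav is a placeholder audio file.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jinunyachhyon/whisper-tiny-eng-transliterated-nep",
)

# The pipeline handles feature extraction and decoding; Whisper expects
# 16 kHz audio, and the pipeline resamples file inputs as needed.
result = asr("sample.wav")
print(result["text"])
```

The hyperparameter list maps naturally onto `Seq2SeqTrainingArguments`. The sketch below is a hypothetical reconstruction, not the author's training script; dataset loading, the data collator, and the WER metric wiring are omitted.

```python
# Sketch of training arguments mirroring the listed hyperparameters.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-tiny-eng-transliterated-nep",  # assumed output dir
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,  # effective (total) train batch size: 4 * 4 = 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=200,
    max_steps=500,  # training_steps: 500
    fp16=True,      # mixed_precision_training: Native AMP
)
```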
generation_config.json ADDED
@@ -0,0 +1,149 @@
+ {
+   "alignment_heads": [
+     [
+       1,
+       0
+     ],
+     [
+       2,
+       0
+     ],
+     [
+       2,
+       5
+     ],
+     [
+       3,
+       0
+     ],
+     [
+       3,
+       1
+     ],
+     [
+       3,
+       2
+     ],
+     [
+       3,
+       3
+     ],
+     [
+       3,
+       4
+     ]
+   ],
+   "begin_suppress_tokens": [
+     220,
+     50256
+   ],
+   "bos_token_id": 50257,
+   "decoder_start_token_id": 50257,
+   "eos_token_id": 50256,
+   "forced_decoder_ids": [
+     [
+       1,
+       50362
+     ]
+   ],
+   "is_multilingual": false,
+   "max_initial_timestamp_index": 50,
+   "max_length": 448,
+   "no_timestamps_token_id": 50362,
+   "pad_token_id": 50256,
+   "prev_sot_token_id": 50360,
+   "return_timestamps": false,
+   "suppress_tokens": [
+     1,
+     2,
+     7,
+     8,
+     9,
+     10,
+     14,
+     25,
+     26,
+     27,
+     28,
+     29,
+     31,
+     58,
+     59,
+     60,
+     61,
+     62,
+     63,
+     90,
+     91,
+     92,
+     93,
+     357,
+     366,
+     438,
+     532,
+     685,
+     705,
+     796,
+     930,
+     1058,
+     1220,
+     1267,
+     1279,
+     1303,
+     1343,
+     1377,
+     1391,
+     1635,
+     1782,
+     1875,
+     2162,
+     2361,
+     2488,
+     3467,
+     4008,
+     4211,
+     4600,
+     4808,
+     5299,
+     5855,
+     6329,
+     7203,
+     9609,
+     9959,
+     10563,
+     10786,
+     11420,
+     11709,
+     11907,
+     13163,
+     13697,
+     13700,
+     14808,
+     15306,
+     16410,
+     16791,
+     17992,
+     19203,
+     19510,
+     20724,
+     22305,
+     22935,
+     27007,
+     30109,
+     30420,
+     33409,
+     34949,
+     40283,
+     40493,
+     40549,
+     47282,
+     49146,
+     50257,
+     50357,
+     50358,
+     50359,
+     50360,
+     50361
+   ],
+   "transformers_version": "4.47.1"
+ }
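
The config above sets the decoding defaults that transformers reads at `generate()` time. Here is a small sketch of loading and inspecting it, under the same assumed repo id as the usage example above:

```python
# Load the committed generation config and inspect a few fields.
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained(
    "jinunyachhyon/whisper-tiny-eng-transliterated-nep"  # assumed repo id
)

# forced_decoder_ids pins decoder position 1 to token 50362
# (<|notimestamps|> in the English-only Whisper vocabulary),
# consistent with "return_timestamps": false above.
print(gen_config.forced_decoder_ids)      # [[1, 50362]]
print(gen_config.no_timestamps_token_id)  # 50362

# Ids in suppress_tokens are masked out at every decoding step.
print(len(gen_config.suppress_tokens), "suppressed token ids")
```

Passing a modified copy via `model.generate(..., generation_config=gen_config)` overrides these defaults on a per-call basis.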