Martha-987 committed
Commit e51db1d · 1 Parent(s): 7ff8b89

Update README.md

 
# Whisper Small Ar - Martha

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set:

- Loss: 0.5854
- Wer: 70.2071
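For reference, here is a minimal transcription sketch using the Transformers `pipeline` API. The repository id `Martha-987/whisper-small-ar` is an assumption based on the committer's username, not taken from this card; substitute the checkpoint's actual Hub id:

```python
# Minimal usage sketch (not from this repo's training script).
# The model id below is a guess based on the committer's username;
# replace it with the actual Hub id of this checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Martha-987/whisper-small-ar",  # hypothetical id
)

# Transcribe a local audio file (any format ffmpeg can decode).
result = asr("sample.mp3")
print(result["text"])
```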
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 500
- mixed_precision_training: Native AMP
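These settings map directly onto `Seq2SeqTrainingArguments` from Transformers; a sketch of how they could be reproduced is below. The argument names are from the Transformers API, while the output path and any option not listed above are illustrative:

```python
# Sketch of training arguments matching the reported hyperparameters.
# The output dir is illustrative, not from the source; the Adam betas and
# epsilon listed above are the optimizer defaults, so they need no arguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-ar",  # illustrative path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=500,
    fp16=True,  # "Native AMP" mixed precision
)
```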
### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer      |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9692        | 0.14  | 125  | 1.3372          | 173.0952 |
| 0.5716        | 0.29  | 250  | 0.9058          | 148.6795 |
| 0.3297        | 0.43  | 375  | 0.5825          | 63.6709  |
| 0.3083        | 0.57  | 500  | 0.5854          | 70.2071  |
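WER here is reported as a percentage, and values above 100 are possible early in training because inserted words can outnumber the reference words. A sketch of how the metric is typically computed with the `evaluate` library follows; the transcripts in it are invented, not from this evaluation:

```python
# Sketch of a word-error-rate computation. The values in the table above
# come from training; the prediction/reference strings here are made up.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["hello world", "good morning"]
references = ["hello word", "good morning"]

# evaluate returns a fraction; the table reports it scaled to a percentage.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```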
### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2