Scrya committed
Commit d3f82ad
1 parent: 2866a94

update model card README.md

Files changed (1)
  1. README.md +84 -0
README.md ADDED
---
language:
- vi
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Medium VI - Multi - Augmented
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: common_voice_11_0
      type: common_voice_11_0
      config: vi
      split: test
      args: vi
    metrics:
    - name: Wer
      type: wer
      value: 16.659355121737224
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Whisper Medium VI - Multi - Augmented

This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3696
- Wer: 16.6594
- Cer: 7.7625
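
For quick inference, the checkpoint can be loaded with the standard `transformers` automatic-speech-recognition pipeline. The snippet below is a minimal sketch, not the author's script: it assumes `transformers`, `torch`, and `ffmpeg` are installed, and `MODEL_ID` and the audio path are placeholders to be replaced with this repository's Hub id and a real Vietnamese audio file.

```python
from transformers import pipeline

# Placeholder: substitute the Hub id of this fine-tuned checkpoint.
MODEL_ID = "<this-repo-id>"

asr = pipeline(
    "automatic-speech-recognition",
    model=MODEL_ID,
    chunk_length_s=30,  # Whisper processes audio in 30-second windows
)

# Transcribe a local Vietnamese audio file (any format ffmpeg can decode).
print(asr("speech_vi.wav")["text"])
```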

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
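
As a rough reference, these settings correspond to a `Seq2SeqTrainingArguments` configuration along the lines of the sketch below. This is a reconstruction for illustration only, not the original training script; the output directory, evaluation schedule, and `predict_with_generate` flag are assumptions.

```python
from transformers import Seq2SeqTrainingArguments

# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults, so they are not set explicitly.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-medium-vi",  # placeholder output directory
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,                     # "Native AMP" mixed-precision training
    evaluation_strategy="steps",   # assumption: eval every 1000 steps, matching the table below
    eval_steps=1000,
    predict_with_generate=True,    # assumption: needed so WER/CER can be computed at eval time
)
```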

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     | Cer    |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|
| 0.1992        | 1.8   | 1000 | 0.2726          | 17.4929 | 8.2562 |
| 0.0402        | 3.6   | 2000 | 0.3317          | 17.4929 | 8.2588 |
| 0.0073        | 5.4   | 3000 | 0.3429          | 17.6793 | 8.8913 |
| 0.0014        | 7.19  | 4000 | 0.3599          | 19.0283 | 9.5103 |
| 0.0006        | 8.99  | 5000 | 0.3696          | 16.6594 | 7.7625 |
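
The Wer and Cer values above are reported as percentages. As a minimal illustration (not the project's evaluation code), scores of this kind can be computed with the Hugging Face `evaluate` library, assuming `evaluate` and `jiwer` are installed; the strings below are made-up examples, not real model output.

```python
import evaluate

wer = evaluate.load("wer")
cer = evaluate.load("cer")

# Toy predictions/references purely for illustration.
predictions = ["xin chào việt nam"]
references = ["xin chào Việt Nam"]

print("WER (%):", 100 * wer.compute(predictions=predictions, references=references))
print("CER (%):", 100 * cer.compute(predictions=predictions, references=references))
```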

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2