Jasper881108 committed
Commit a878b8c
1 Parent(s): 63fabdf

Create README.md

Files changed (1): README.md (+64 -0)

---
license: apache-2.0
tags:
- whisper-medium
- asr
- zh-TW
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Medium TW
  results:
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: mozilla-foundation/common_voice_11_0
      type: mozilla-foundation/common_voice_11_0
      config: zh-TW
      split: test
    metrics:
    - type: wer
      value: 9.78
      name: WER
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Whisper Medium TW

This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the mozilla-foundation/common_voice_11_0 dataset.
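
The card does not show how to run the model; a minimal sketch using the Transformers pipeline API (the repo id `Jasper881108/whisper-medium-tw` is a placeholder, since the card does not state where the model is hosted):

```python
from transformers import pipeline

# Placeholder repo id: the card does not state the actual Hub location.
asr = pipeline(
    "automatic-speech-recognition",
    model="Jasper881108/whisper-medium-tw",
)

# The pipeline accepts a path to an audio file or a raw 16 kHz waveform.
result = asr("sample_zh_tw.wav")
print(result["text"])
```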

## Training and evaluation data

Training:
- [mozilla-foundation/common_voice_11_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0) (train+validation)

Evaluation:
- [mozilla-foundation/common_voice_11_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0) (test)
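
As a sketch, these splits can be loaded with the `datasets` library (Common Voice 11 is gated, so this assumes you have accepted the dataset terms on the Hub and are authenticated):

```python
from datasets import Audio, load_dataset

# Gated dataset: accept the terms on the Hub and log in first
# (e.g. `huggingface-cli login`).
train = load_dataset(
    "mozilla-foundation/common_voice_11_0", "zh-TW", split="train+validation"
)
test = load_dataset("mozilla-foundation/common_voice_11_0", "zh-TW", split="test")

# Whisper's feature extractor expects 16 kHz audio.
train = train.cast_column("audio", Audio(sampling_rate=16_000))
test = test.cast_column("audio", Audio(sampling_rate=16_000))
```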

## Training procedure

- Datasets were augmented using [audiomentations](https://github.com/iver56/audiomentations), applying PitchShift, TimeStretch, Gain, and AddGaussianNoise transformations, each with probability `p=0.3` (a sketch of the pipeline follows this list).
- A space is inserted between every pair of Chinese characters, as demonstrated in the original Whisper paper; with this normalization, WER is effectively equal to CER (see the scoring sketch below).
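
The exact transform ranges are not given in the card; this sketch keeps audiomentations' default parameter ranges and only sets `p=0.3` on each transform:

```python
import numpy as np
from audiomentations import AddGaussianNoise, Compose, Gain, PitchShift, TimeStretch

# Each transform fires independently with probability p=0.3;
# the value ranges are the library defaults (an assumption, not from the card).
augment = Compose([
    PitchShift(p=0.3),
    TimeStretch(p=0.3),
    Gain(p=0.3),
    AddGaussianNoise(p=0.3),
])

# audiomentations operates on float32 numpy waveforms.
waveform = np.zeros(16_000, dtype=np.float32)
augmented = augment(samples=waveform, sample_rate=16_000)
```
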
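A hypothetical implementation of the character-spacing step (the helper name `space_chinese` is mine, and `jiwer` is used here only to illustrate why WER equals CER under this normalization):

```python
import re

from jiwer import wer

def space_chinese(text: str) -> str:
    """Insert a space between adjacent CJK characters so that
    word-level scoring operates on single characters."""
    return re.sub(r"(?<=[\u4e00-\u9fff])(?=[\u4e00-\u9fff])", " ", text)

reference = space_chinese("今天天氣很好")   # "今 天 天 氣 很 好"
hypothesis = space_chinese("今天天器很好")  # one substituted character

# Every "word" is now a single character, so WER == CER: 1 error / 6 chars.
print(wer(reference, hypothesis))  # 0.1666...
```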

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: Adam
- generation_max_length: 225
- warmup_steps: 500
- max_steps: 2400
- fp16: True
- evaluation_strategy: "steps"
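
The card lists values only; as a sketch, they might map onto `transformers.Seq2SeqTrainingArguments` roughly as follows (the `output_dir` is a placeholder, `predict_with_generate` is an assumption needed for generation-based metrics like WER, and the Trainer's default AdamW matches the "Adam" entry above):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-tw",  # placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=1,
    warmup_steps=500,
    max_steps=2400,
    fp16=True,
    evaluation_strategy="steps",
    generation_max_length=225,
    predict_with_generate=True,  # assumed: required to compute WER during eval
)
```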

### Framework versions

- Transformers 4.27.1
- Pytorch 2.0.1+cu120
- Datasets 2.13.1