aapot committed
Commit b457dd5
1 Parent(s): dea2a27

Update README.md

Files changed (1): README.md +15 -6

README.md CHANGED
@@ -37,22 +37,31 @@ should probably proofread and complete it, then remove this comment. -->

  # wav2vec2-xlsr-1b-finnish-lm-v2

- This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b).
- It achieves the following results on the evaluation set with LM:
  - Wer: 4.19
  - Cer: 0.90

  ## Model description

- More information needed

  ## Intended uses & limitations

- More information needed

  ## Training and evaluation data

- More information needed

  ## Training procedure

@@ -63,7 +72,7 @@ The following hyperparameters were used during training:
  - train_batch_size: 32
  - eval_batch_size: 8
  - seed: 42
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_steps: 500
  - num_epochs: 10

  # wav2vec2-xlsr-1b-finnish-lm-v2

+ This acoustic model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) for Finnish ASR. It was fine-tuned on 275.6 hours of transcribed Finnish speech.
+ It achieves the following results on the Common Voice 7 test set together with a language model (Finnish KenLM):
  - Wer: 4.19
  - Cer: 0.90

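A minimal inference sketch, assuming the checkpoint is published on the Hugging Face Hub under an id like `aapot/wav2vec2-xlsr-1b-finnish-lm-v2` (placeholder) and that the repository ships the KenLM-backed processor mentioned above:

```python
# Minimal sketch: Finnish transcription with the fine-tuned model via the
# transformers ASR pipeline. The Hub id is a placeholder assumption.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="aapot/wav2vec2-xlsr-1b-finnish-lm-v2",  # placeholder Hub id
)

# Expects 16 kHz mono audio; chunking keeps memory bounded on long recordings.
result = asr("finnish_sample.wav", chunk_length_s=30)
print(result["text"])
```
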
  ## Model description

+ TODO

  ## Intended uses & limitations

+ TODO

  ## Training and evaluation data

+ This model was fine-tuned with 275.6 hours of Finnish transcribed speech data from the following datasets:
+
+ | Dataset | Hours | % of total hours |
+ |:------------------------------------------------------------------------------------------------------------------------------|:--------:|:----------------:|
+ | [Common Voice 7.0 Finnish train+evaluation+other splits](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | 9.70 h | 3.52 % |
+ | [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) | 0.24 h | 0.09 % |
+ | [VoxPopuli Finnish](https://github.com/facebookresearch/voxpopuli) | 21.97 h | 7.97 % |
+ | [CSS10 Finnish](https://github.com/kyubyong/css10) | 10.32 h | 3.74 % |
+ | [Aalto Finnish Parliament ASR Corpus](http://urn.fi/urn:nbn:fi:lb-2021051903) | 228.00 h | 82.73 % |
+ | [Finnish Broadcast Corpus](http://urn.fi/urn:nbn:fi:lb-2016042502) | 5.37 h | 1.95 % |
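
The card does not show how these corpora were assembled; as a rough sketch, the Common Voice 7.0 Finnish portion could be loaded with the `datasets` library (which splits correspond to "train+evaluation+other" is an assumption, and the dataset is gated, so an authenticated Hub token is required):

```python
# Rough sketch: loading the Common Voice 7.0 Finnish splits listed above.
# The split mapping for "train+evaluation+other" is an assumption; combining
# all six corpora into the full 275.6 h training set is not shown here.
from datasets import concatenate_datasets, load_dataset

splits = ["train", "validation", "other"]  # assumed mapping
cv_fi = concatenate_datasets(
    [
        load_dataset(
            "mozilla-foundation/common_voice_7_0",
            "fi",
            split=s,
            use_auth_token=True,  # gated dataset: requires an authenticated Hub token
        )
        for s in splits
    ]
)
print(cv_fi)  # audio + sentence columns used for fine-tuning
```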

  ## Training procedure

  - train_batch_size: 32
  - eval_batch_size: 8
  - seed: 42
+ - optimizer: [8-bit Adam](https://github.com/facebookresearch/bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_steps: 500
  - num_epochs: 10
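
For readers unfamiliar with 8-bit Adam, a minimal sketch of the optimizer/scheduler combination implied by these hyperparameters (the learning rate and total step count are not visible in this hunk, so placeholder values are used; the data pipeline and training loop are omitted):

```python
# Minimal sketch: 8-bit Adam from bitsandbytes plus a linear schedule with
# 500 warmup steps, matching the hyperparameters listed above.
# The learning rate and total step count are placeholders, not values from the card.
import bitsandbytes as bnb
from transformers import Wav2Vec2ForCTC, get_linear_schedule_with_warmup

# Base checkpoint named in the card; the CTC head is freshly initialized here.
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-xls-r-1b")

optimizer = bnb.optim.Adam8bit(
    model.parameters(),
    lr=5e-5,                 # placeholder learning rate
    betas=(0.9, 0.999),
    eps=1e-8,
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=500,
    num_training_steps=10_000,  # placeholder; depends on dataset size, batch size 32, 10 epochs
)
```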