bjelkenhed committed
Commit f1b788e · Parent(s): 00cd1d1
Update README.md
README.md CHANGED
@@ -32,14 +32,15 @@ should probably proofread and complete it, then remove this comment. -->
 
 # Whisper Large Swedish
 
-This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on
-It achieves the following results on the evaluation set
+This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2), trained on the NST Swedish ASR dataset and evaluated on the Common Voice 11 test set.
+It achieves the following results on the evaluation set:
+
 - Loss: 0.2337
 - Wer: 9.2206
 
 ## Model description
 
-More information needed
+For comparison, openai/whisper-large-v2 had a WER of 10.6 on the Common Voice 9 test set.
 
 ## Intended uses & limitations
 
@@ -47,7 +48,8 @@ More information needed
 
 ## Training and evaluation data
 
-More information needed
+The training dataset contains 276,000 examples; with a batch size of 64 and 5,000 training steps, this amounts to 1.14 epochs.
+More training data or more epochs would probably improve the results even further.
 
 ## Training procedure
 
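
The updated card describes the Common Voice 11 evaluation but does not include a code snippet. Below is a minimal sketch of how that evaluation could be reproduced with the Hugging Face transformers pipeline, assuming the fine-tuned checkpoint is published on the Hub. The repo id `bjelkenhed/whisper-large-sv`, the 100-example subset, and the lowercase normalization are illustrative assumptions, not taken from the commit.

```python
import evaluate
from datasets import Audio, load_dataset
from transformers import pipeline

# Placeholder repo id -- substitute the actual Hub path of this fine-tuned checkpoint.
MODEL_ID = "bjelkenhed/whisper-large-sv"

# Whisper-style ASR pipeline; chunking handles clips longer than 30 seconds.
asr = pipeline("automatic-speech-recognition", model=MODEL_ID, chunk_length_s=30)

# Swedish test split of Common Voice 11 (requires accepting the dataset terms on the Hub),
# resampled to the 16 kHz rate Whisper expects.
cv11 = load_dataset("mozilla-foundation/common_voice_11_0", "sv-SE", split="test")
cv11 = cv11.cast_column("audio", Audio(sampling_rate=16_000))

wer = evaluate.load("wer")
predictions, references = [], []

# A small subset keeps the sketch quick; the card's 9.2206 WER refers to the full test set.
for sample in cv11.select(range(100)):
    out = asr(
        sample["audio"]["array"],
        generate_kwargs={"task": "transcribe", "language": "swedish"},
    )
    predictions.append(out["text"].strip().lower())
    references.append(sample["sentence"].strip().lower())

# evaluate's WER is a fraction; the card reports it as a percentage.
print(f"WER: {100 * wer.compute(predictions=predictions, references=references):.2f}")
```

Text normalization (here just lowercasing) noticeably affects WER, so matching the reported figure exactly would also depend on the normalization used during the original evaluation.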