End of training
README.md CHANGED

@@ -1,33 +1,25 @@
 ---
 library_name: transformers
-language:
-- zh
 license: apache-2.0
 base_model: openai/whisper-base
 tags:
-- hf-asr-leaderboard
 - generated_from_trainer
-
--
+metrics:
+- wer
 model-index:
-- name:
+- name: whisper-base-zh
   results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-#
+# whisper-base-zh
 
-This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the
+This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the None dataset.
 It achieves the following results on the evaluation set:
--
--
-- eval_runtime: 533.856
-- eval_samples_per_second: 3.606
-- eval_steps_per_second: 0.451
-- epoch: 4.7718
-- step: 2300
+- Loss: 0.3426
+- Wer: 78.6221
 
 ## Model description
 
@@ -56,6 +48,26 @@ The following hyperparameters were used during training:
 - training_steps: 4000
 - mixed_precision_training: Native AMP
 
+### Training results
+
+| Training Loss | Epoch  | Step | Validation Loss | Wer     |
+|:-------------:|:------:|:----:|:---------------:|:-------:|
+| 0.4992        | 0.2075 | 100  | 0.4841          | 93.0091 |
+| 0.4325        | 0.4149 | 200  | 0.4223          | 82.7761 |
+| 0.4028        | 0.6224 | 300  | 0.3979          | 81.6616 |
+| 0.3866        | 0.8299 | 400  | 0.3846          | 79.8886 |
+| 0.3322        | 1.0373 | 500  | 0.3731          | 80.3951 |
+| 0.3108        | 1.2448 | 600  | 0.3672          | 79.2300 |
+| 0.3139        | 1.4523 | 700  | 0.3601          | 79.1287 |
+| 0.324         | 1.6598 | 800  | 0.3558          | 78.7741 |
+| 0.2629        | 1.8672 | 900  | 0.3525          | 78.1155 |
+| 0.2421        | 2.0747 | 1000 | 0.3521          | 78.5208 |
+| 0.217         | 2.2822 | 1100 | 0.3495          | 78.3688 |
+| 0.2071        | 2.4896 | 1200 | 0.3490          | 78.5714 |
+| 0.2183        | 2.6971 | 1300 | 0.3452          | 78.6727 |
+| 0.2158        | 2.9046 | 1400 | 0.3426          | 78.6221 |
+
+
 ### Framework versions
 
 - Transformers 4.46.2
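The hyperparameters visible in this hunk (only `training_steps` and the mixed-precision flag; the rest of the list is truncated in this view) map onto `Seq2SeqTrainingArguments` roughly as sketched below. This is a hedged illustration, not the actual training script: the output directory is a placeholder, the 100-step evaluation cadence is inferred from the results table, and any hyperparameter not shown in the diff is simply omitted.

```python
# Rough sketch of how the hyperparameters shown in this diff would be expressed
# with transformers' Seq2SeqTrainingArguments. Values not visible in the card
# (learning rate, batch sizes, warmup, ...) are intentionally left out.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-base-zh",   # placeholder output directory
    max_steps=4000,                 # "training_steps: 4000" in the card
    fp16=True,                      # "mixed_precision_training: Native AMP"
    eval_strategy="steps",
    eval_steps=100,                 # inferred from the 100-step results table
    logging_steps=100,
    predict_with_generate=True,     # needed to compute WER during evaluation
    report_to="tensorboard",        # writes the events.out.tfevents file below
)
```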
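For completeness, here is a minimal inference-and-scoring sketch using the `transformers` ASR pipeline and the `evaluate` WER metric, the metric reported above. The repo id, audio file, and reference transcript are illustrative placeholders, not values taken from this commit.

```python
# Minimal sketch: transcribe one clip with the fine-tuned checkpoint and score it
# with WER. The model id, audio path, and reference text are placeholders.
import evaluate
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/whisper-base-zh",  # hypothetical repo id for this fine-tune
)

prediction = asr("sample.wav")["text"]      # any audio file ffmpeg can decode

# The "wer" metric splits on whitespace; for Chinese text, references and
# predictions are typically space-segmented first (or the "cer" metric is used).
wer_metric = evaluate.load("wer")
wer = wer_metric.compute(predictions=[prediction], references=["参考 文本 示例"])
print(f"transcript: {prediction}")
print(f"WER: {100 * wer:.2f}")
```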
runs/Nov17_04-29-12_ac7f6a829392/events.out.tfevents.1731817755.ac7f6a829392.6432.0 CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:48f18de5e9fe275b25237da480022d7d8c0c00e314434fc1ed20c6500cdeb8ed
+size 22498
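The changed file above is a Git LFS pointer to a TensorBoard event log written during training. As a hedged sketch (assuming the actual event file has been pulled from LFS, and that the scalar tag names match what the Trainer usually logs), its contents can be inspected like this:

```python
# Sketch: read scalar metrics out of the Trainer's TensorBoard event file.
# The path must point at the real event file (fetched via git-lfs), not the pointer.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

event_file = "runs/Nov17_04-29-12_ac7f6a829392/events.out.tfevents.1731817755.ac7f6a829392.6432.0"
ea = EventAccumulator(event_file)
ea.Reload()

print(ea.Tags()["scalars"])            # available tags depend on what the Trainer logged
for scalar in ea.Scalars("eval/wer"):  # assumed tag name for the WER curve
    print(scalar.step, scalar.value)
```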