---
language:
- en
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: openai/whisper-tiny
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# openai/whisper-tiny

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the pphuc25/EngMed dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4853
- Wer: 66.7045
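
As a rough usage sketch, a checkpoint like this can be loaded with the `transformers` automatic-speech-recognition pipeline. The repo id below is a placeholder, since this card does not state where the fine-tuned weights are hosted:

```python
# Minimal inference sketch for this fine-tuned Whisper checkpoint.
# "your-username/whisper-tiny-engmed" is a placeholder repo id, not the actual checkpoint path.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/whisper-tiny-engmed",  # substitute the real checkpoint or a local path
)

# Transcribe a local audio file (16 kHz mono audio works best for Whisper).
result = asr("sample.wav")
print(result["text"])
```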

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the training-arguments sketch after this list):
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
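
A minimal sketch of equivalent `Seq2SeqTrainingArguments` is shown below; the hyperparameter values come from the list above, while the output directory and the choice of `Seq2SeqTrainingArguments` (rather than the exact original training script) are assumptions:

```python
# Sketch of training arguments mirroring the hyperparameters listed above.
# Model loading, dataset preparation, and the Trainer wiring are omitted here.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-engmed",  # placeholder path, not from this card
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=20,
)
```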

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer     |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 1.2768        | 1.0   | 2506  | 1.8614          | 66.7187 |
| 0.9077        | 2.0   | 5012  | 1.9243          | 66.6383 |
| 0.6044        | 3.0   | 7518  | 2.0816          | 71.3474 |
| 0.4112        | 4.0   | 10024 | 2.2672          | 69.5064 |
| 0.2788        | 5.0   | 12530 | 2.4746          | 65.6728 |
| 0.1736        | 6.0   | 15036 | 2.6423          | 68.1196 |
| 0.096         | 7.0   | 17542 | 2.7603          | 66.7897 |
| 0.0632        | 8.0   | 20048 | 2.9008          | 68.4746 |
| 0.046         | 9.0   | 22554 | 3.0145          | 69.8850 |
| 0.0338        | 10.0  | 25060 | 3.0977          | 66.8749 |
| 0.0203        | 11.0  | 27566 | 3.1614          | 67.5186 |
| 0.0207        | 12.0  | 30072 | 3.2117          | 65.2847 |
| 0.011         | 13.0  | 32578 | 3.3028          | 66.4253 |
| 0.007         | 14.0  | 35084 | 3.3854          | 68.1102 |
| 0.0071        | 15.0  | 37590 | 3.3962          | 66.8702 |
| 0.0041        | 16.0  | 40096 | 3.4312          | 66.8323 |
| 0.0043        | 17.0  | 42602 | 3.4244          | 66.5294 |
| 0.0036        | 18.0  | 45108 | 3.4340          | 66.8512 |
| 0.0019        | 19.0  | 47614 | 3.4810          | 67.6558 |
| 0.0003        | 20.0  | 50120 | 3.4853          | 66.7045 |
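
The WER values in the table are reported as percentages. A minimal sketch of computing the same metric with the `evaluate` library is below; the example strings are illustrative and not taken from the pphuc25/EngMed evaluation set:

```python
# Word error rate computation sketch using the `evaluate` library.
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["the patient reports mild chest pain"]
references = ["the patient reported mild chest pain"]

wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {100 * wer:.2f}%")  # compute() returns a fraction; the table reports percentages
```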


### Framework versions

- Transformers 4.41.1
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1