---
library_name: transformers
language:
- spa
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Tiny All Audios - vfranchis
  results: []
---

# Whisper Tiny All Audios - vfranchis

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the All audios 1.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0368
- WER: 1.9658

## Model description

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) for Spanish (`spa`) automatic speech recognition, trained on the All audios 1.0 dataset. The architecture and tokenizer are unchanged from the base Whisper Tiny checkpoint; only the weights were updated during fine-tuning.

## Intended uses & limitations

The model is intended for transcribing Spanish speech. As a Whisper Tiny fine-tune it favors speed and a small footprint over accuracy, and its behavior on audio that differs from the All audios 1.0 training distribution has not been characterized in this card.
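
A minimal inference sketch with the `transformers` pipeline API is shown below. The repository id is a placeholder, since the card does not state where this checkpoint is hosted:

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual Hub id of this checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/whisper-tiny-all-audios",
)

# Whisper operates on 16 kHz audio; given a file path, the pipeline
# decodes and resamples common formats automatically.
result = asr("sample_spanish.wav")
print(result["text"])
```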

## Training and evaluation data

The model was fine-tuned and evaluated on the All audios 1.0 dataset. The card does not document the dataset's size, source, or train/evaluation split.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch in code follows the list):
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 25
- training_steps: 650
- mixed_precision_training: Native AMP
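
The list above maps onto `Seq2SeqTrainingArguments` roughly as follows. This is a sketch, not the original training script: the output directory and evaluation cadence are assumptions, with the 25-step cadence inferred from the results table below.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-all-audios",  # assumed name, not from the card
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 8 * 2 = 16
    lr_scheduler_type="linear",
    warmup_steps=25,
    max_steps=650,
    fp16=True,  # "Native AMP" mixed precision
    eval_strategy="steps",  # assumed from the 25-step cadence in the results table
    eval_steps=25,
    # The default optimizer (AdamW with betas=(0.9, 0.999), eps=1e-8)
    # matches the card's "Adam" line, so no optimizer kwargs are needed.
)
```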

### Training results

| Training Loss | Epoch | Step | Validation Loss | WER     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 1.4661        | 0.05  | 25   | 0.5654          | 20.5212 |
| 0.2642        | 0.1   | 50   | 0.1588          | 8.6859  |
| 0.1282        | 0.15  | 75   | 0.1102          | 6.3494  |
| 0.0861        | 0.2   | 100  | 0.0901          | 4.9068  |
| 0.0652        | 0.25  | 125  | 0.0784          | 4.0738  |
| 0.0676        | 0.3   | 150  | 0.0695          | 3.4490  |
| 0.0865        | 0.35  | 175  | 0.0649          | 3.4185  |
| 0.0454        | 0.4   | 200  | 0.0610          | 3.0477  |
| 0.0517        | 0.45  | 225  | 0.0567          | 2.9664  |
| 0.0471        | 0.5   | 250  | 0.0548          | 2.8344  |
| 0.0394        | 0.55  | 275  | 0.0521          | 2.8648  |
| 0.0347        | 0.6   | 300  | 0.0488          | 2.4585  |
| 0.0596        | 0.65  | 325  | 0.0477          | 2.4483  |
| 0.0426        | 0.7   | 350  | 0.0452          | 2.7836  |
| 0.0428        | 0.75  | 375  | 0.0436          | 2.2401  |
| 0.0518        | 0.8   | 400  | 0.0417          | 2.1181  |
| 0.0379        | 0.85  | 425  | 0.0407          | 2.0928  |
| 0.0259        | 0.9   | 450  | 0.0399          | 1.9861  |
| 0.0691        | 0.95  | 475  | 0.0394          | 2.2096  |
| 0.0382        | 1.0   | 500  | 0.0384          | 2.1131  |
| 0.0311        | 1.05  | 525  | 0.0377          | 1.9810  |
| 0.0301        | 1.1   | 550  | 0.0375          | 1.9404  |
| 0.021         | 1.15  | 575  | 0.0371          | 1.9505  |
| 0.0205        | 1.2   | 600  | 0.0369          | 1.9404  |
| 0.0163        | 1.25  | 625  | 0.0369          | 1.9505  |
| 0.018         | 1.3   | 650  | 0.0368          | 1.9658  |
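
For reference, WER figures like those in the table are typically computed with the `evaluate` library; Whisper fine-tuning recipes usually scale the score by 100, which matches the magnitude of the values above. A minimal sketch:

```python
import evaluate

wer_metric = evaluate.load("wer")

# Toy decoded outputs versus reference transcripts.
predictions = ["hola mundo", "buenos dias a todos"]
references = ["hola mundo", "buenos dias a todo"]

# compute() returns a fraction; multiply by 100 to match the table's scale.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```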


### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1