thiagobarbosa committed
Commit 9d9e9e6
1 Parent(s): 76ecb36

Model save

Files changed (2):
  1. README.md +13 -12
  2. pytorch_model.bin +1 -1
README.md CHANGED
@@ -17,8 +17,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.4732
-- Wer: 18.2079
+- Loss: 0.6283
+- Wer: 25.9071
 
 ## Model description
 
@@ -38,29 +38,30 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 2.5e-05
-- train_batch_size: 8
-- eval_batch_size: 8
+- train_batch_size: 12
+- eval_batch_size: 12
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 100
-- training_steps: 4000
+- lr_scheduler_warmup_steps: 120
+- training_steps: 2400
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Wer |
 |:-------------:|:-----:|:----:|:---------------:|:-------:|
-| 0.382 | 0.57 | 400 | 0.4085 | 18.3348 |
-| 0.1182 | 1.15 | 800 | 0.4220 | 17.7613 |
-| 0.0991 | 1.72 | 1200 | 0.4479 | 19.6230 |
-| 0.0424 | 2.3 | 1600 | 0.4551 | 18.2173 |
-| 0.0458 | 2.87 | 2000 | 0.4732 | 18.2079 |
+| 0.0871 | 2.72 | 400 | 0.4838 | 24.4078 |
+| 0.0066 | 5.44 | 800 | 0.5647 | 25.5452 |
+| 0.0013 | 8.16 | 1200 | 0.5981 | 25.6110 |
+| 0.0008 | 10.88 | 1600 | 0.6143 | 25.6533 |
+| 0.0006 | 13.61 | 2000 | 0.6245 | 25.7661 |
+| 0.0006 | 16.33 | 2400 | 0.6283 | 25.9071 |
 
 
 ### Framework versions
 
 - Transformers 4.36.2
-- Pytorch 2.1.2+cu121
+- Pytorch 2.1.1
 - Datasets 2.16.1
 - Tokenizers 0.15.0
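
For reference, a minimal sketch of how the hyperparameters listed in the updated card could be expressed as `Seq2SeqTrainingArguments` in transformers 4.36. The output directory and the evaluation/save cadence of 400 steps are assumptions (the cadence is inferred from the results table); everything else is taken from the card.

```python
# Hypothetical sketch: maps the hyperparameters from the updated model card
# onto transformers.Seq2SeqTrainingArguments. output_dir is an assumed name;
# eval/save every 400 steps is inferred from the results table, not stated.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-base-finetuned",  # assumed, not from the card
    learning_rate=2.5e-5,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=12,
    seed=42,
    warmup_steps=120,
    max_steps=2400,
    lr_scheduler_type="linear",
    adam_beta1=0.9,              # Adam betas/epsilon from the card
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,                   # "Native AMP" mixed precision
    evaluation_strategy="steps",
    eval_steps=400,              # matches the step cadence in the results table
    save_steps=400,
    predict_with_generate=True,  # needed to compute WER during evaluation
)
```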
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:121a8a974e2a494c6a10492ed519d13bbe571a4cc6a690c68e0875aa6f6ecd29
+oid sha256:b0271d6d59002685571be5c6dc020ac3710e4987a1cd31d5ca3e2918f9a9bccb
 size 290459230
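
As a quick check of the checkpoint saved in this commit, a minimal inference sketch is shown below. The repository id is a placeholder; substitute the actual repo this commit belongs to.

```python
# Minimal inference sketch for the fine-tuned Whisper checkpoint saved in this
# commit. "thiagobarbosa/<repo-name>" is a placeholder, not a real identifier.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="thiagobarbosa/<repo-name>",  # placeholder: use the actual repo id
)

# Transcribe a local audio file (any format ffmpeg can decode).
result = asr("sample.wav")
print(result["text"])
```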