cladsu committed on
Commit b0f4122
1 Parent(s): 4954fc0

Model save

README.md CHANGED
@@ -18,8 +18,8 @@ should probably proofread and complete it, then remove this comment. -->
  
  This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 2.1441
- - Wer: 83.1511
+ - Loss: 1.5431
+ - Wer: 92.5913
  
  ## Model description
  
@@ -39,28 +39,30 @@ More information needed
  
  The following hyperparameters were used during training:
  - learning_rate: 1e-05
- - train_batch_size: 1
- - eval_batch_size: 1
+ - train_batch_size: 4
+ - eval_batch_size: 4
  - seed: 42
  - gradient_accumulation_steps: 16
- - total_train_batch_size: 16
+ - total_train_batch_size: 64
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_steps: 1
- - training_steps: 10
+ - training_steps: 20
  - mixed_precision_training: Native AMP
  
  ### Training results
  
- | Training Loss | Epoch  | Step | Validation Loss | Wer     |
- |:-------------:|:------:|:----:|:---------------:|:-------:|
- | 2.9091        | 0.0023 | 5    | 2.4472          | 75.7419 |
- | 2.1091        | 0.0045 | 10   | 2.1441          | 83.1511 |
+ | Training Loss | Epoch  | Step | Validation Loss | Wer      |
+ |:-------------:|:------:|:----:|:---------------:|:--------:|
+ | 2.0572        | 0.0091 | 5    | 1.9552          | 79.3737  |
+ | 1.7179        | 0.0182 | 10   | 1.6960          | 112.0115 |
+ | 1.4811        | 0.0273 | 15   | 1.5831          | 91.7683  |
+ | 1.513         | 0.0364 | 20   | 1.5431          | 92.5913  |
  
  
  ### Framework versions
  
  - Transformers 4.44.2
- - Pytorch 2.4.0+cu121
+ - Pytorch 2.4.1+cu121
  - Datasets 3.0.0
  - Tokenizers 0.19.1
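
For readers checking the updated hyperparameters: `total_train_batch_size` is a derived value, not one set directly, and the diff is internally consistent, since the effective batch size is `train_batch_size × gradient_accumulation_steps = 4 × 16 = 64`. A minimal sketch of that arithmetic, using only the values listed in the card:

```python
# Hyperparameter values from the updated model card above.
train_batch_size = 4
gradient_accumulation_steps = 16
training_steps = 20

# With gradient accumulation, the optimizer steps once per
# `gradient_accumulation_steps` micro-batches, so the effective
# (total) train batch size is the product of the two.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
assert total_train_batch_size == 64  # matches the card

# Total training examples consumed over this (very short) run:
examples_seen = total_train_batch_size * training_steps
print(total_train_batch_size, examples_seen)  # 64 1280
```

The same product explains the old values as well: 1 × 16 gave the previous `total_train_batch_size` of 16.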
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:0e87ebba29242c5c057b6db4ee70efc7bfaf5a6c7cd8f798aa14db53b283f781
+ oid sha256:0bc8d72bcbcf2b038467fc3159dab62ee94d831700700fcce8c0b3d3f78e944c
  size 3055544304
runs/Sep20_11-08-24_754b6630704e/events.out.tfevents.1726830518.754b6630704e.620.1 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:3ff1a84ebad668f1ae219ed989b475da03e9caf73b24752ba8630e8a06c76cdd
- size 11190
+ oid sha256:784898ccdf8ed8d17127e6f8ce254c51b966cb69682e32b879919f8bf8299da3
+ size 11538
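
A note on the `Wer` column in the README's training-results table: it is word error rate in percent, i.e. the word-level edit distance between reference and hypothesis transcripts divided by the number of reference words. Because insertions count against the hypothesis, WER can exceed 100, as in the 112.0115 row. A from-scratch sketch of the metric (not necessarily the exact implementation used during training, which typically comes from a metrics library):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate in percent: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for Levenshtein distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)  # sub/del/ins
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat", "the cat sat"))  # 0.0
print(wer("the cat sat", "the act sat"))  # one substitution out of three words
print(wer("a", "a b c"))                  # two insertions, one reference word
```

The last call illustrates how a hypothesis much longer than the reference yields a WER above 100, which is what the step-10 evaluation in the table shows.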