parambharat committed
Commit 767a940 · 1 Parent(s): 31e23b0

update model card README.md

Files changed (1)
  1. README.md +10 -11
README.md CHANGED
@@ -1,5 +1,4 @@
 ---
-license: apache-2.0
 tags:
 - generated_from_trainer
 model-index:
@@ -12,15 +11,15 @@ should probably proofread and complete it, then remove this comment. -->
 
 # whisper-tiny-ml
 
-This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the None dataset.
+This model was trained from scratch on the None dataset.
 It achieves the following results on the evaluation set:
-- eval_loss: 0.8221
-- eval_wer: 114.1119
-- eval_runtime: 87.3821
-- eval_samples_per_second: 1.144
-- eval_steps_per_second: 0.08
-- epoch: 9.01
-- step: 1000
+- eval_loss: 0.6238
+- eval_wer: 99.8783
+- eval_runtime: 93.8436
+- eval_samples_per_second: 1.066
+- eval_steps_per_second: 0.075
+- epoch: 4.02
+- step: 500
 
 ## Model description
 
@@ -47,8 +46,8 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 64
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 1000
-- training_steps: 10000
+- lr_scheduler_warmup_steps: 500
+- training_steps: 5000
 - mixed_precision_training: Native AMP
 
 ### Framework versions
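The card reports `eval_wer` (word error rate) as its main metric; the new value of ~99.9 means nearly every reference word is substituted, deleted, or missing an insertion's worth of errors. As a minimal sketch of how this metric is typically computed (word-level Levenshtein distance over the reference length, expressed as a percentage here; the card itself does not show the metric code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate as a percentage: edit distance over reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)
```

Production model cards usually compute this with the `evaluate` or `jiwer` libraries rather than by hand; the sketch only illustrates what the number measures.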
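The hyperparameter block above specifies a linear scheduler with 500 warmup steps over 5000 training steps (post-change values). A small sketch of the schedule this implies, assuming the standard warmup-then-linear-decay shape used by `transformers`' linear scheduler; the peak learning rate is not shown in this diff, so it is left as a parameter rather than guessed:

```python
def linear_schedule(step, peak_lr, warmup_steps=500, training_steps=5000):
    """Learning rate at `step`: linear warmup to peak_lr, then linear decay to 0.

    warmup_steps and training_steps default to the card's updated values;
    peak_lr is an assumption left to the caller (not stated in this diff).
    """
    if step < warmup_steps:
        # Ramp linearly from 0 at step 0 to peak_lr at warmup_steps.
        return peak_lr * step / warmup_steps
    # Decay linearly from peak_lr at warmup_steps to 0 at training_steps.
    return peak_lr * max(0.0, (training_steps - step) / (training_steps - warmup_steps))
```

Note the training run only reached step 500 (epoch 4.02) of the planned 5000 steps, so per this schedule it stopped right at the end of warmup, at the peak learning rate.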