ibrahimj committed
Commit a8efd38
1 parent: 60a0391

End of training
README.md CHANGED
```diff
@@ -2,12 +2,25 @@
 license: apache-2.0
 base_model: openai/whisper-tiny
 tags:
+- whisper-event
 - generated_from_trainer
+datasets:
+- nadsoft/QASR-Speech-Resource
 metrics:
 - wer
 model-index:
 - name: hamsa-tiny-finetuned-qasr
-  results: []
+  results:
+  - task:
+      name: Automatic Speech Recognition
+      type: automatic-speech-recognition
+    dataset:
+      name: nadsoft/QASR-Speech-Resource default
+      type: nadsoft/QASR-Speech-Resource
+    metrics:
+    - name: Wer
+      type: wer
+      value: 25.45148200004746
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -15,7 +28,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # hamsa-tiny-finetuned-qasr
 
-This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
+This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the nadsoft/QASR-Speech-Resource default dataset.
 It achieves the following results on the evaluation set:
 - Loss: 0.3310
 - Wer: 25.4515
```
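The reported Wer is a word error rate in percent: word-level edit distance (substitutions + deletions + insertions) divided by the number of reference words, times 100. As an illustration only (the Trainer typically computes this via the `evaluate`/`jiwer` libraries, not this code), a minimal word-level edit-distance sketch:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate in percent: word-level Levenshtein distance / reference length * 100."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])  # match or substitution
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)  # vs. deletion / insertion
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

# One deleted word out of six reference words -> WER of 100/6 percent
print(wer("the cat sat on the mat", "the cat sat on mat"))
```

A Wer of 25.4515 therefore means roughly one word error per four reference words on the evaluation set.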
all_results.json ADDED
```json
{
  "epoch": 6.01,
  "eval_loss": 0.3309793174266815,
  "eval_runtime": 259.1043,
  "eval_samples_per_second": 19.317,
  "eval_steps_per_second": 2.416,
  "eval_wer": 25.45148200004746,
  "train_loss": 0.3415145711453756,
  "train_runtime": 260660.25,
  "train_samples_per_second": 36.83,
  "train_steps_per_second": 0.575
}
```
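The throughput fields are internally consistent: multiplying a reported rate by the corresponding runtime recovers approximate totals (approximate, because the rates are rounded to a few decimals). A quick sanity check, with the values copied from the JSON above:

```python
import json

# Values copied verbatim from all_results.json
all_results = json.loads("""{
  "epoch": 6.01,
  "eval_loss": 0.3309793174266815,
  "eval_runtime": 259.1043,
  "eval_samples_per_second": 19.317,
  "eval_steps_per_second": 2.416,
  "eval_wer": 25.45148200004746,
  "train_loss": 0.3415145711453756,
  "train_runtime": 260660.25,
  "train_samples_per_second": 36.83,
  "train_steps_per_second": 0.575
}""")

# rate * runtime ~= total items processed
eval_samples = all_results["eval_samples_per_second"] * all_results["eval_runtime"]
train_steps = all_results["train_steps_per_second"] * all_results["train_runtime"]
print(round(eval_samples))  # approximate number of evaluation examples
print(round(train_steps))   # approximate optimizer steps over the 6.01 epochs
```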
eval_results.json ADDED
```json
{
  "epoch": 6.01,
  "eval_loss": 0.3309793174266815,
  "eval_runtime": 259.1043,
  "eval_samples_per_second": 19.317,
  "eval_steps_per_second": 2.416,
  "eval_wer": 25.45148200004746
}
```
runs/Feb12_19-50-30_ip-10-0-3-5.eu-west-1.compute.internal/events.out.tfevents.1708041798.ip-10-0-3-5.eu-west-1.compute.internal.25402.1 ADDED
```
version https://git-lfs.github.com/spec/v1
oid sha256:791e58486c88aa8c10cfa5d0ad52721a693f37e4caf3b7fb862ff4234aecf3c7
size 412
```
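The TensorBoard event file is stored via Git LFS, so the repository itself only holds a small pointer file in the `version` / `oid` / `size` line format shown. A minimal sketch of reading such a pointer (each line is a space-separated key/value pair per the LFS v1 pointer spec):

```python
# A Git LFS v1 pointer file, as committed in place of the real binary
pointer_text = """version https://git-lfs.github.com/spec/v1
oid sha256:791e58486c88aa8c10cfa5d0ad52721a693f37e4caf3b7fb862ff4234aecf3c7
size 412
"""

def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of an LFS pointer into a dict of fields."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)  # oid is '<hash-algo>:<hex digest>'
    return {
        "version": fields["version"],
        "oid_algo": algo,
        "oid": digest,
        "size": int(fields["size"]),  # size of the real object in bytes
    }

info = parse_lfs_pointer(pointer_text)
print(info["size"])  # the pointed-to object is only 412 bytes here
```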
train_results.json ADDED
```json
{
  "epoch": 6.01,
  "train_loss": 0.3415145711453756,
  "train_runtime": 260660.25,
  "train_samples_per_second": 36.83,
  "train_steps_per_second": 0.575
}
```
trainer_state.json ADDED
The diff for this file is too large to render.