sharjeel103 committed
Commit ec3ffe1
Parent(s): ee6b5fc

Update README.md

Files changed (1): README.md +10 -7
README.md CHANGED

@@ -25,6 +25,7 @@ model-index:
     - name: Wer
       type: wer
       value: 16.033947800693557
+pipeline_tag: automatic-speech-recognition
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -39,17 +40,19 @@ It achieves the following results on the evaluation set:
 
 ## Model description
 
-More information needed
+Whisper Tiny Urdu ASR model.
+
+This Whisper Tiny model has been fine-tuned on the Common Voice 17 dataset, which includes over 55 hours of Urdu speech data. The model was trained twice to optimize its performance:
+
+- First training: the model was trained on the training set and evaluated on the test set for 20 epochs.
+- Second training: the model was retrained on the combined train and validation sets, with the test set used for validation, also for 20 epochs.
+
+Despite being the smallest variant in its family, this model achieves state-of-the-art performance on Urdu ASR tasks. It is specifically designed for deployment on small devices, offering an excellent balance between efficiency and accuracy.
 
 ## Intended uses & limitations
 
-More information needed
+This model is particularly suited to applications on edge devices with limited computational resources. It can also be converted to a FasterWhisper model using the CTranslate2 library, allowing even faster inference on devices with lower processing power.
 
-## Training and evaluation data
-
-More information needed
-
-## Training procedure
-
 ### Training hyperparameters
 
@@ -77,4 +80,4 @@ The following hyperparameters were used during training:
 - Transformers 4.42.3
 - Pytorch 2.1.2
 - Datasets 2.20.0
-- Tokenizers 0.19.1
+- Tokenizers 0.19.1
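The metric block above reports a Wer value of 16.03, i.e. a word error rate of about 16% on the evaluation set. As a minimal sketch of how this metric is computed (word-level edit distance divided by the number of reference words; the Trainer itself typically relies on the `evaluate`/`jiwer` implementations rather than this hand-rolled one):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate in percent: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words (Levenshtein).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

print(wer("this is a test", "this is test"))  # one deletion out of 4 words -> 25.0
```

A WER of 16.03 therefore means roughly one word-level error per six reference words.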
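The intended-uses text added in this commit mentions converting the checkpoint to a FasterWhisper model via CTranslate2. A sketch of that workflow, assuming `ctranslate2` and `faster-whisper` are installed; the repo ID and audio path below are hypothetical placeholders, not names confirmed by this card:

```python
# Sketch: convert the fine-tuned checkpoint to CTranslate2 format, then
# transcribe with faster-whisper. Replace the placeholder repo ID and
# audio file with real ones.
from ctranslate2.converters import TransformersConverter
from faster_whisper import WhisperModel

# Hypothetical Hub repo ID for this fine-tuned model.
converter = TransformersConverter("sharjeel103/whisper-tiny-urdu")
converter.convert("whisper-tiny-urdu-ct2", quantization="int8")  # int8 keeps the footprint small

model = WhisperModel("whisper-tiny-urdu-ct2", compute_type="int8")
segments, info = model.transcribe("audio.wav", language="ur")  # "ur" = Urdu
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```

The int8 quantization chosen here trades a little accuracy for a much smaller memory footprint, which matches the edge-device deployment scenario the card describes; `float16` is an alternative when a GPU is available.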