elsayedissa committed on
Commit
4696b0a
1 Parent(s): b595fed

Update README.md

Files changed (1)
  1. README.md +38 -9
README.md CHANGED
@@ -5,31 +5,29 @@ tags:
 metrics:
 - wer
 model-index:
-- name: whisper-large-v2-japanese-24h
+- name: whisper-large-v2-japanese-5k-steps
   results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# whisper-large-v2-japanese-24h
+# whisper-large-v2-japanese-5k-steps
 
-This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the None dataset.
+This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the Japanese CommonVoice dataset (v11).
 It achieves the following results on the evaluation set:
 - Loss: 0.4200
 - Wer: 0.7449
 
 ## Model description
 
-More information needed
-
-## Intended uses & limitations
-
-More information needed
+This model was fine-tuned for 5000 steps for research purposes, so its transcriptions may not be satisfactory for end users.
 
 ## Training and evaluation data
 
-More information needed
+- Training data: CommonVoice (v11) train split
+- Validation data: CommonVoice (v11) validation split
+- Test data: CommonVoice (v11) test split
 
 ## Training procedure
@@ -56,6 +54,37 @@ The following hyperparameters were used during training:
 | 0.0002 | 30.53 | 4000 | 0.4123 | 0.7443 |
 | 0.0002 | 38.17 | 5000 | 0.4200 | 0.7449 |
 
+### Transcription
+
+```python
+from datasets import load_dataset, Audio
+import torch
+from transformers import WhisperProcessor, WhisperForConditionalGeneration
+
+# device
+device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+
+# load the fine-tuned model and processor
+processor = WhisperProcessor.from_pretrained("clu-ling/whisper-large-v2-japanese-5k-steps")
+model = WhisperForConditionalGeneration.from_pretrained("clu-ling/whisper-large-v2-japanese-5k-steps").to(device)
+forced_decoder_ids = processor.get_decoder_prompt_ids(language="ja", task="transcribe")
+
+# stream one sample from the CommonVoice (v11) validation split
+commonvoice_eval = load_dataset("mozilla-foundation/common_voice_11_0", "ja", split="validation", streaming=True)
+commonvoice_eval = commonvoice_eval.cast_column("audio", Audio(sampling_rate=16000))
+sample = next(iter(commonvoice_eval))["audio"]
+
+# extract input features and generate token ids
+input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features
+predicted_ids = model.generate(input_features.to(device), forced_decoder_ids=forced_decoder_ids)
+
+# decode the token ids to text, dropping special tokens
+transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
+
+print(transcription)
+```
 
 ### Framework versions
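The Wer reported in the card is a fraction (0.7449, i.e. roughly 74.5%). As an illustrative sketch, not the card's actual evaluation code: word error rate is the word-level edit distance between a reference and a hypothesis, divided by the number of reference words. For Japanese, text is usually segmented into words or characters first (for example with a tokenizer such as MeCab), since written Japanese has no spaces.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words.

    Assumes whitespace-separated tokens; Japanese text would need to be
    segmented before calling this.
    """
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("a b c d", "a x c"))  # 1 substitution + 1 deletion over 4 words -> 0.5
```

In practice the same definition is provided by the Hugging Face `evaluate` library's `wer` metric (backed by jiwer), which is what Trainer-based evaluations typically use.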