Abhinav28 committed on
Commit
585eb19
1 Parent(s): c0a4842

End of training

README.md ADDED
@@ -0,0 +1,75 @@
+ ---
+ language:
+ - hi
+ license: apache-2.0
+ library_name: peft
+ tags:
+ - hf-asr-leaderboard
+ - generated_from_trainer
+ datasets:
+ - mozilla-foundation/common_voice_11_0
+ base_model: openai/whisper-large-v3
+ model-index:
+ - name: Abhinav28/whisper-large-v3-hindi-100steps
+   results: []
+ ---
+
+ # Abhinav28/whisper-large-v3-hindi-100steps
+
+ This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the Common Voice 11.0 dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.3031
+
+ ## Model description
+
+ This repository contains a LoRA adapter trained with PEFT on top of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) for Hindi automatic speech recognition. The adapter targets the `q_proj` and `v_proj` attention projections with rank 32, alpha 64, and dropout 0.05 (see `adapter_config.json` below); the base model weights are not included and must be loaded separately.
+
+ ## Intended uses & limitations
+
+ Intended for transcribing Hindi speech, as in the usage sketch below. The adapter was trained for only 100 steps, and only the evaluation loss is reported (no word error rate), so transcription quality should be validated before production use.
+
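+ A minimal usage sketch (not part of the original card; it assumes the standard `transformers`/`peft` loading pattern, and the silent clip is a placeholder for real speech):
+
+ ```python
+ import numpy as np
+ from peft import PeftModel
+ from transformers import WhisperForConditionalGeneration, WhisperProcessor
+
+ # Load the frozen base model, then attach this LoRA adapter on top of it.
+ base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3")
+ model = PeftModel.from_pretrained(base, "Abhinav28/whisper-large-v3-hindi-100steps")
+ processor = WhisperProcessor.from_pretrained("openai/whisper-large-v3")
+
+ audio = np.zeros(16000, dtype=np.float32)  # 1 s of silence as a placeholder
+ inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
+ ids = model.generate(input_features=inputs.input_features,
+                      language="hi", task="transcribe")
+ print(processor.batch_decode(ids, skip_special_tokens=True)[0])
+ ```
+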
+ ## Training and evaluation data
+
+ The adapter was trained and evaluated on the Hindi (`hi`) subset of the [mozilla-foundation/common_voice_11_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0) dataset.
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (mirrored in the sketch after this list):
+ - learning_rate: 0.001
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 50
+ - training_steps: 100
+ - mixed_precision_training: Native AMP
+
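+ The list above maps onto `Seq2SeqTrainingArguments` roughly as follows (a hedged sketch; the actual training script is not in this commit, so `output_dir` and anything unlisted are assumptions, and the Adam betas/epsilon are the optimizer defaults):
+
+ ```python
+ from transformers import Seq2SeqTrainingArguments
+
+ training_args = Seq2SeqTrainingArguments(
+     output_dir="whisper-large-v3-hindi-100steps",  # assumed
+     per_device_train_batch_size=8,   # listed as train_batch_size: 8
+     per_device_eval_batch_size=8,    # listed as eval_batch_size: 8
+     learning_rate=1e-3,
+     lr_scheduler_type="linear",
+     warmup_steps=50,
+     max_steps=100,
+     seed=42,
+     fp16=True,  # "Native AMP" mixed precision
+ )
+ ```
+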
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-----:|:----:|:---------------:|
+ | No log | 0.77 | 10 | 0.7025 |
+ | No log | 1.54 | 20 | 0.4690 |
+ | No log | 2.31 | 30 | 0.3444 |
+ | No log | 3.08 | 40 | 0.2747 |
+ | No log | 3.85 | 50 | 0.3133 |
+ | No log | 4.62 | 60 | 0.3181 |
+ | No log | 5.38 | 70 | 0.3010 |
+ | No log | 6.15 | 80 | 0.2810 |
+ | No log | 6.92 | 90 | 0.3127 |
+ | 0.2416 | 7.69 | 100 | 0.3031 |
+
+
+ ### Framework versions
+
+ - PEFT 0.7.2.dev0
+ - Transformers 4.36.2
+ - Pytorch 2.0.0
+ - Datasets 2.16.1
+ - Tokenizers 0.15.0
adapter_config.json ADDED
@@ -0,0 +1,30 @@
+ {
+   "alpha_pattern": {},
+   "auto_mapping": {
+     "base_model_class": "WhisperForConditionalGeneration",
+     "parent_library": "transformers.models.whisper.modeling_whisper"
+   },
+   "base_model_name_or_path": "openai/whisper-large-v3",
+   "bias": "none",
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 64,
+   "lora_dropout": 0.05,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "r": 32,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "v_proj",
+     "q_proj"
+   ],
+   "task_type": null,
+   "use_rslora": false
+ }
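A sketch of how this configuration maps onto the `peft` API (an assumed reconstruction; the original training code is not part of this commit):

```python
from peft import LoraConfig, get_peft_model
from transformers import WhisperForConditionalGeneration

# Mirrors adapter_config.json above: only q_proj and v_proj get LoRA weights.
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    bias="none",
)

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3")
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the LoRA matrices are trainable
```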
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ab58387c894698c3c0f2b86f22d7260d171a12d5f1ab96513503965abc4e48e3
+ size 62969640
preprocessor_config.json ADDED
@@ -0,0 +1,14 @@
+ {
+   "chunk_length": 30,
+   "feature_extractor_type": "WhisperFeatureExtractor",
+   "feature_size": 128,
+   "hop_length": 160,
+   "n_fft": 400,
+   "n_samples": 480000,
+   "nb_max_frames": 3000,
+   "padding_side": "right",
+   "padding_value": 0.0,
+   "processor_class": "WhisperProcessor",
+   "return_attention_mask": false,
+   "sampling_rate": 16000
+ }
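These values describe the standard Whisper front end: each 30 s chunk of 16 kHz audio (n_samples = 30 × 16000 = 480000) becomes a 128-bin log-mel spectrogram with 480000 / 160 = 3000 frames. A sketch of the feature extractor in use (the silent clip is a placeholder):

```python
import numpy as np
from transformers import WhisperFeatureExtractor

fe = WhisperFeatureExtractor.from_pretrained("openai/whisper-large-v3")
audio = np.zeros(16000, dtype=np.float32)  # 1 s of silence as a placeholder
feats = fe(audio, sampling_rate=16000, return_tensors="np").input_features
print(feats.shape)  # (1, 128, 3000): feature_size x nb_max_frames after padding
```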
runs/Jan05_13-32-01_jupyter-abhinav-2eswami-40telusinternational-2ecom/events.out.tfevents.1704461534.jupyter-abhinav-2eswami-40telusinternational-2ecom.899.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:903c14e665dc1114282283647f16373bc4b2ccb7ac4929a7c783c61e4e42105d
+ size 5612
runs/Jan05_13-32-01_jupyter-abhinav-2eswami-40telusinternational-2ecom/events.out.tfevents.1704461666.jupyter-abhinav-2eswami-40telusinternational-2ecom.899.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:db93309d09bec11eeeb487570d26830e349f25d7e198c5c5789b934985128200
+ size 8006
runs/Jan05_13-47-33_jupyter-abhinav-2eswami-40telusinternational-2ecom/events.out.tfevents.1704462453.jupyter-abhinav-2eswami-40telusinternational-2ecom.11062.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b8708badf3288c2dba56f9a2a10e7132a2dcd485a5681853837b66bf55ad4beb
+ size 8774
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:29e48557e772095dad7172dbc2252c8cf432e0ad0e6d9e70b065fe2fe2aae362
+ size 4475
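`training_args.bin` is the training-arguments object that `Trainer` saves with `torch.save` (here presumably a `Seq2SeqTrainingArguments`, matching the hyperparameters listed in the README). A sketch for inspecting it, assuming the file has been downloaded locally:

```python
import torch

# The file is a pickled arguments object, so a plain torch.load recovers it
# (on newer PyTorch releases, pass weights_only=False explicitly).
args = torch.load("training_args.bin")
print(args.learning_rate, args.max_steps, args.warmup_steps)
```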