spktsagar committed on
Commit 81f8c54
1 Parent(s): 52e58e5

add readme description

Files changed (1):
  1. README.md +46 -8
README.md CHANGED
@@ -1,10 +1,35 @@
 ---
 license: apache-2.0
 tags:
 - generated_from_trainer
 model-index:
 - name: wav2vec2-large-xls-r-300m-nepali-openslr
-   results: []
 ---

 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -12,7 +37,7 @@ should probably proofread and complete it, then remove this comment. -->

 # wav2vec2-large-xls-r-300m-nepali-openslr

- This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
 It achieves the following results on the evaluation set:
 - eval_loss: 0.1767
 - eval_wer: 0.2127
@@ -24,17 +49,30 @@ It achieves the following results on the evaluation set:

 ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

- ## Training procedure

 ### Training hyperparameters

 
 ---
+ language:
+ - ne
 license: apache-2.0
 tags:
 - generated_from_trainer
+ - automatic-speech-recognition
+ - speech
+ - openslr
+ - nepali
+ datasets:
+ - spktsagar/openslr-nepali-asr-cleaned
+ metrics:
+ - wer
 model-index:
 - name: wav2vec2-large-xls-r-300m-nepali-openslr
+   results:
+   - task:
+       type: automatic-speech-recognition
+       name: Nepali Speech Recognition
+     dataset:
+       type: spktsagar/openslr-nepali-asr-cleaned
+       name: OpenSLR Nepali ASR
+       config: original
+       split: train
+     metrics:
+     - type: wer
+       value: 21.27
+       name: Test WER
+       verified: false
 ---

 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->

 # wav2vec2-large-xls-r-300m-nepali-openslr

+ This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the [OpenSLR Nepali ASR](https://huggingface.co/datasets/spktsagar/openslr-nepali-asr-cleaned) dataset.
 It achieves the following results on the evaluation set:
 - eval_loss: 0.1767
 - eval_wer: 0.2127
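
The reported `eval_wer` of 0.2127 means roughly one word in five differs from the reference transcription. As an illustration only (not the exact metric code used during training), word error rate is word-level edit distance divided by the number of reference words:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # DP table: d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)
```

For example, `wer("a b c", "a x c")` is 1/3: one substitution against three reference words.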
 
 ## Model description

+ Wav2Vec2 is a pretrained model for Automatic Speech Recognition (ASR), released in September 2020 by Alexei Baevski, Michael Auli, and Alex Conneau. Soon after Wav2Vec2's superior performance was demonstrated on LibriSpeech, one of the most popular English ASR datasets, Facebook AI presented a multilingual version of Wav2Vec2 called XLSR. XLSR stands for cross-lingual speech representations and refers to the model's ability to learn speech representations that are useful across multiple languages.

+ ## How to use?
+ 1. Install librosa and transformers:
+ ```
+ pip install librosa transformers
+ ```
+ 2. Run the following code, which loads your audio file, the preprocessor, and the model, and returns your prediction:
+ ```python
+ import librosa
+ from transformers import pipeline
+
+ audio, sample_rate = librosa.load("<path to your audio file>", sr=16000)
+ recognizer = pipeline("automatic-speech-recognition", model="spktsagar/wav2vec2-large-xls-r-300m-nepali-openslr")
+ prediction = recognizer(audio)
+ ```
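
The pipeline's output is a dict whose `"text"` key holds the transcription. A small helper for pulling it out, as an illustrative sketch assuming that output shape:

```python
def extract_transcript(prediction: dict) -> str:
    """Pull the transcription out of the ASR pipeline's output dict."""
    # transformers' ASR pipeline returns {"text": "<transcription>"}
    return prediction.get("text", "").strip()

# e.g. with the recognizer above:
# print(extract_transcript(recognizer(audio)))
```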

+ ## Intended uses & limitations

+ The model is trained on the OpenSLR Nepali ASR dataset, which itself contains some incorrect transcriptions, so the model's predictions will not be perfect. Similarly, because of Colab's resource limits, utterances longer than 5 seconds were filtered out of the dataset during training and evaluation, so the model might not perform as expected on audio longer than 5 seconds.
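
One possible workaround for the 5-second limit (an assumption, not something the card prescribes) is to split long recordings into chunks of at most 5 seconds and transcribe each chunk separately:

```python
import numpy as np

def split_into_chunks(audio: np.ndarray, sample_rate: int = 16000,
                      max_seconds: float = 5.0) -> list:
    """Split a 1-D audio array into consecutive chunks of at most max_seconds."""
    step = int(max_seconds * sample_rate)
    return [audio[i:i + step] for i in range(0, len(audio), step)]

# Hypothetical glue with the recognizer from the snippet above:
# chunks = split_into_chunks(audio)
# transcript = " ".join(recognizer(chunk)["text"] for chunk in chunks)
```

Naive cutting can split a word across a chunk boundary; if your transformers version supports it, the ASR pipeline's `chunk_length_s` argument does overlapped chunking and may give better results.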

+ ## Training and evaluation data and training procedure

+ For dataset preparation and the training code, please consult [my blog](https://sagar-spkt.github.io/posts/2022/08/finetune-xlsr-nepali/).

 ### Training hyperparameters