crossdelenna committed
Commit: acd2a7b
Parent(s): fb23782
Update README.md
README.md CHANGED
@@ -21,13 +21,13 @@ It achieves the following results on the evaluation set:
 ## Model description
 Wav2vec2 automatic speech recognition for an Indian English accent, using a language model.
 ## Intended uses & limitations
-This model is intended for my personal use only. Intentionally, the dataset has absolutely no speech variance. It is fine-tuned only on my own data, and I am using it for live speech dictation with PyAudio non-blocking streaming microphone data (https://gist.github.com/KenoLeon/13dfb803a21a08cf224b2e6df0feed80). Before inference, train further on your own data. The training data has a lot of quantitative finance-related
+This model is intended for my personal use only. Intentionally, the dataset has absolutely no speech variance. It is fine-tuned only on my own data, and I am using it for live speech dictation with PyAudio non-blocking streaming microphone data (https://gist.github.com/KenoLeon/13dfb803a21a08cf224b2e6df0feed80). Before inference, train further on your own data. The training data has a lot of quantitative finance-related jargon and urban slang. Note that it does not hash out F-words, so it is NSFW.
 
 ## Training and evaluation data
 The Facebook base-large model, further fine-tuned on thirty-two hours of personal recordings. The recordings are of a male voice with an Indian English accent, made on an omnidirectional microphone with a lot of background noise.
 
 ## Training procedure
-I downloaded my Reddit and Twitter data and started recording
+I downloaded my Reddit and Twitter data and started recording clips, each not exceeding 13 seconds. Once I had a sample of about six hours, I fine-tuned the model, reaching approximately 19% WER. Afterwards, I kept adding data and fine-tuning; the model is now trained on thirty hours of data.
 (The idea now is to fine-tune every two to three months, only on unrecognized words.)
 
 ### Training hyperparameters
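
For context on the dictation setup described in the card, here is a minimal sketch of PyAudio's non-blocking (callback) streaming feeding a wav2vec2 CTC model, in the spirit of the linked gist. The checkpoint id `crossdelenna/wav2vec2-base-en` is a placeholder rather than this repo's real id, the chunking is simplistic (fixed five-second windows, no overlap), and decoding is plain greedy CTC; the language-model decoding mentioned in the model description would instead use `Wav2Vec2ProcessorWithLM` with pyctcdecode.

```python
import queue

import numpy as np
import pyaudio
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

MODEL_ID = "crossdelenna/wav2vec2-base-en"  # placeholder, not the real repo id
RATE = 16_000       # wav2vec2 expects 16 kHz mono audio
CHUNK_SECONDS = 5   # transcribe in fixed five-second windows

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.eval()

audio_q: "queue.Queue[bytes]" = queue.Queue()

def callback(in_data, frame_count, time_info, status):
    # PyAudio invokes this on its own thread; never block here.
    audio_q.put(in_data)
    return (None, pyaudio.paContinue)

pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE, input=True,
                 frames_per_buffer=1024, stream_callback=callback)
stream.start_stream()

buf = b""
try:
    while True:
        buf += audio_q.get()
        if len(buf) >= RATE * 2 * CHUNK_SECONDS:   # int16 = 2 bytes per sample
            audio = np.frombuffer(buf, dtype=np.int16).astype(np.float32) / 32768.0
            inputs = processor(audio, sampling_rate=RATE, return_tensors="pt")
            with torch.no_grad():
                logits = model(inputs.input_values).logits
            print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
            buf = b""
except KeyboardInterrupt:
    stream.stop_stream()
    stream.close()
    pa.terminate()
```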
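Since the card asks you to fine-tune further on your own data before inference, the following sketch shows one way to do that with the `transformers` Trainer. It assumes a hypothetical `my_clips/` audiofolder whose metadata supplies a `text` transcript column; the checkpoint id, directory, and hyperparameters are placeholders, not the settings used for this model.

```python
import torch
from datasets import Audio, load_dataset
from transformers import (Trainer, TrainingArguments, Wav2Vec2ForCTC,
                          Wav2Vec2Processor)

MODEL_ID = "crossdelenna/wav2vec2-base-en"  # placeholder, not the real repo id

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.freeze_feature_encoder()  # keep the convolutional front end frozen

# Hypothetical layout: my_clips/ holds the <=13 s recordings plus a
# metadata.csv mapping each file to a "text" transcript column.
ds = load_dataset("audiofolder", data_dir="my_clips")["train"]
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

def prepare(batch):
    batch["input_values"] = processor(batch["audio"]["array"],
                                      sampling_rate=16_000).input_values[0]
    batch["labels"] = processor(text=batch["text"]).input_ids
    return batch

ds = ds.map(prepare, remove_columns=ds.column_names)

def collate(features):
    # Pad audio and labels separately; CTC loss ignores -100 label positions.
    batch = processor.pad([{"input_values": f["input_values"]} for f in features],
                          return_tensors="pt")
    labels = processor.pad(labels=[{"input_ids": f["labels"]} for f in features],
                           return_tensors="pt")
    batch["labels"] = labels["input_ids"].masked_fill(
        labels["attention_mask"].eq(0), -100)
    return batch

args = TrainingArguments(output_dir="w2v2-finetuned",   # placeholder values
                         per_device_train_batch_size=4,
                         num_train_epochs=3,
                         fp16=torch.cuda.is_available())
Trainer(model=model, args=args, train_dataset=ds, data_collator=collate).train()
```

The collator pads audio and labels independently because CTC inputs and transcripts have different lengths; masking padded label positions with -100 keeps them out of the loss.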