---
language:
- boz
license: other
tags:
- automatic-speech-recognition
- sil-ai/bloom-speech
- generated_from_trainer
datasets:
- bloom_speech
model-index:
- name: wav2vec2-bloom-speech-boz
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Bloom Speech boz
      type: sil-ai/bloom-speech
      args: boz
    metrics:
    - name: Test WER
      type: wer
      value: 30.78
    - name: Test CER
      type: cer
      value: 8.35
extra_gated_prompt: >-
  One more step before getting this model.

  This model is open access and available only for non-commercial use, with an
  SIL International AI & NLP RAIL-M license further specifying rights and usage.

  The SIL RAIL-M License specifies:

  1. You can't use the model to deliberately produce nor share illegal or
  harmful outputs or content. In particular, you cannot use the model with the
  intent or effect of harming or enabling discrimination against Indigenous
  People.

  2. SIL claims no rights on outputs you generate for non-commercial use. You
  are free to use them and are accountable for their use, which must not go
  against the provisions set in the license.

  3. You may redistribute the weights and use the model non-commercially,
  including as a service. If you do, please be aware that you have to include
  the same use restrictions as the ones in the license and share a copy of the
  SIL International AI & NLP RAIL-M license with all your users (please read
  the license entirely and carefully). Please read the full license here:
  https://huggingface.co/spaces/sil-ai/model-license

  By clicking on "Access repository" below, you also accept that your *contact
  information* (email address and username) can be shared with the model
  authors.

  If you would like to ask about commercial uses of this model, please [email
  us](mailto:sj@derivation.co).
extra_gated_fields:
  I have read the License and agree with its terms: checkbox
---
# wav2vec2-bloom-speech-boz

## Model description
- Homepage: SIL AI
- Point of Contact: SIL AI email
- Source Data: Bloom Library
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the SIL-AI/bloom-speech - BOZ (Bozo, Tieyaxo) dataset. It achieves the following results on the evaluation set:
- Loss: 0.6363
- Wer: 0.3078
- Cer: 0.0835
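The Wer and Cer figures above are the word error rate and character error rate on the evaluation set. As a reference, here is a minimal, dependency-free sketch of how these metrics are computed (the example strings below are hypothetical; in practice a library such as `jiwer` or `evaluate` computes them):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (rolling 1-D DP)."""
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                           # deletion
                        dp[j - 1] + 1,                       # insertion
                        prev + (ref[i - 1] != hyp[j - 1]))   # substitution
            prev = cur
    return dp[n]

def wer(reference, hypothesis):
    """Word error rate: word-level edit distance over reference word count."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    """Character error rate: character-level edit distance over reference length."""
    return edit_distance(reference, hypothesis) / len(reference)
```

A WER of 0.3078 therefore means that roughly 31 word-level edits (substitutions, insertions, deletions) are needed per 100 reference words.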
Users should refer to the original model for tutorials on using a trained model for inference.
## Intended uses & limitations

Users of this model must abide by the SIL RAIL-M License.

This model was created as a proof of concept, and no guarantees are made regarding the model's performance in specific situations.
## Training and evaluation data
Training, Validation, and Test datasets were generated from the same corpus, ensuring that no duplicate files were used.
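One way to produce such splits is to deduplicate file names before partitioning, so no recording can land in more than one split. This is a hypothetical sketch (the file names and split fractions are illustrative, not the actual procedure used here):

```python
import random

def split_corpus(files, val_frac=0.1, test_frac=0.1, seed=42):
    """Split a list of audio files into disjoint train/val/test sets,
    dropping duplicate file names before partitioning."""
    unique = sorted(set(files))          # remove duplicates first
    rng = random.Random(seed)            # fixed seed for reproducibility
    rng.shuffle(unique)
    n_test = int(len(unique) * test_frac)
    n_val = int(len(unique) * val_frac)
    test = unique[:n_test]
    val = unique[n_test:n_test + n_val]
    train = unique[n_test + n_val:]
    return train, val, test
```

Because duplicates are removed before shuffling, the three splits are guaranteed to be pairwise disjoint.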
## Training procedure

Standard fine-tuning of XLS-R was used, based on the examples in the Hugging Face Transformers GitHub repository.

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 1000.0
- mixed_precision_training: Native AMP
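With gradient accumulation of 2 and a per-device batch of 16, the effective train batch size is 16 × 2 = 32, as listed. The linear scheduler warms the learning rate up from 0 to 3e-4 over the first 250 optimizer steps, then decays it linearly toward 0. A sketch of that shape (the total step count of 3500 is taken from the last logged step below and is illustrative, not the exact value the trainer computed):

```python
def linear_warmup_decay_lr(step, base_lr=3e-4, warmup_steps=250, total_steps=3500):
    """Learning rate at a given optimizer step: linear warmup to base_lr,
    then linear decay to 0, mirroring a 'linear' lr_scheduler_type with warmup."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

For example, the learning rate is at half its peak (1.5e-4) at step 125, peaks at step 250, and reaches 0 at the final step.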
### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer    | Cer    |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| No log        | 19.22  | 250  | 0.5364          | 0.4952 | 0.1337 |
| 2.125         | 38.44  | 500  | 0.5503          | 0.3817 | 0.1015 |
| 2.125         | 57.67  | 750  | 0.6261          | 0.3557 | 0.0958 |
| 0.0839        | 76.89  | 1000 | 0.6022          | 0.3502 | 0.0931 |
| 0.0839        | 96.15  | 1250 | 0.6244          | 0.3324 | 0.0895 |
| 0.0438        | 115.37 | 1500 | 0.6471          | 0.3242 | 0.0847 |
| 0.0438        | 134.59 | 1750 | 0.6664          | 0.3420 | 0.0931 |
| 0.03          | 153.81 | 2000 | 0.6629          | 0.3133 | 0.0859 |
| 0.03          | 173.07 | 2250 | 0.6339          | 0.3105 | 0.0832 |
| 0.0247        | 192.3  | 2500 | 0.6363          | 0.3078 | 0.0835 |
| 0.0247        | 211.52 | 2750 | 0.6655          | 0.2996 | 0.0808 |
| 0.0191        | 230.74 | 3000 | 0.6656          | 0.3256 | 0.0880 |
| 0.0191        | 249.96 | 3250 | 0.7000          | 0.3092 | 0.0856 |
| 0.0152        | 269.22 | 3500 | 0.7301          | 0.3119 | 0.0871 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.2.2
- Tokenizers 0.12.1