---
language:
  - ca
license: apache-2.0
tags:
  - whisper-event
  - generated_from_trainer
  - hf-asr-leaderboard
datasets:
  - mozilla-foundation/common_voice_11_0
metrics:
  - wer
base_model: openai/whisper-small
model-index:
  - name: Whisper Small Catalan
    results:
      - task:
          type: automatic-speech-recognition
          name: Automatic Speech Recognition
        dataset:
          name: mozilla-foundation/common_voice_11_0 ca
          type: mozilla-foundation/common_voice_11_0
          args: 'config: ca, split: test'
        metrics:
          - type: wer
            value: 8.569471791798646
            name: Wer
      - task:
          type: automatic-speech-recognition
          name: Automatic Speech Recognition
        dataset:
          name: google/fleurs ca
          type: google/fleurs
          args: 'config: ca, split: test'
        metrics:
          - type: wer
            value: 10.64
            name: Wer
      - task:
          type: automatic-speech-recognition
          name: Automatic Speech Recognition
        dataset:
          name: projecte-aina/parlament_parla clean
          type: projecte-aina/parlament_parla
          args: 'config: clean, split: test'
        metrics:
          - type: wer
            value: 19
            name: Wer
---

# Whisper Small Catalan

This is an automatic speech recognition model that also produces punctuation and casing. It is intended for research only; we do not recommend using it in production environments. See our learnings from training these models.

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 ca dataset. It achieves the following results on the evaluation set:

- Loss: 0.1980
- Wer: 8.5695
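
For quick experimentation, the model can be loaded with the `transformers` ASR pipeline. The sketch below makes a few assumptions: `whisper-small-ca` stands in for the full Hub repository id, `sample.wav` is an illustrative file name, and a recent `transformers` release is used in which `generate` accepts `language` and `task` directly.

```python
# Minimal sketch: Catalan transcription with the transformers ASR pipeline.
# "whisper-small-ca" is a placeholder for the full Hub repository id.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="whisper-small-ca",
    chunk_length_s=30,  # Whisper processes audio in 30-second windows
)

# Pin the language and task so Whisper does not auto-detect them.
result = asr(
    "sample.wav",  # illustrative file name
    generate_kwargs={"language": "ca", "task": "transcribe"},
)
print(result["text"])
```

Pinning `language="ca"` avoids Whisper's automatic language detection, which can mislabel short clips.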

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 20000
- mixed_precision_training: Native AMP
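
As a reference point, the sketch below shows how these values would map onto `Seq2SeqTrainingArguments` in `transformers`; the output directory is a placeholder, and anything not listed above (including the Adam betas and epsilon, which are the library defaults) is left at its default.

```python
# Sketch: the hyperparameters above expressed as Seq2SeqTrainingArguments.
# output_dir is a placeholder; unlisted options keep their defaults.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-ca",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",       # linear decay after warmup
    warmup_steps=500,
    max_steps=20000,
    fp16=True,                        # "Native AMP" mixed precision
)
```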

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer     |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.2128        | 0.1   | 2000  | 0.2644          | 13.0303 |
| 0.1361        | 1.1   | 4000  | 0.2300          | 10.9568 |
| 0.0658        | 2.1   | 6000  | 0.2376          | 11.2810 |
| 0.102         | 3.09  | 8000  | 0.2156          | 9.8730  |
| 0.0706        | 4.09  | 10000 | 0.2126          | 9.6179  |
| 0.0428        | 5.09  | 12000 | 0.2178          | 9.3405  |
| 0.0503        | 6.09  | 14000 | 0.2109          | 9.1356  |
| 0.0778        | 7.08  | 16000 | 0.2058          | 9.2001  |
| 0.0082        | 8.08  | 18000 | 0.2173          | 8.9941  |
| 0.0994        | 9.08  | 20000 | 0.1980          | 8.5695  |
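
The Wer column is word error rate in percent. A minimal sketch of computing it with the `evaluate` library, using made-up sentence pairs:

```python
# Sketch: word error rate (WER) in percent with the evaluate library.
# The prediction/reference pairs below are illustrative only.
import evaluate

wer = evaluate.load("wer")
predictions = ["bon dia a tothom", "com estàs"]
references = ["bon dia a tothom", "com estàs avui"]
print(100 * wer.compute(predictions=predictions, references=references))
```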

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 2.7.1
- Tokenizers 0.13.2