---
language:
  - zh
license: apache-2.0
tags:
  - whisper-event
  - generated_from_trainer
datasets:
  - mozilla-foundation/common_voice_11_0
metrics:
  - wer
base_model: kimbochen/whisper-small-zh-tw
model-index:
  - name: Whisper Small Traditional Chinese
    results:
      - task:
          type: automatic-speech-recognition
          name: Automatic Speech Recognition
        dataset:
          name: mozilla-foundation/common_voice_11_0 zh-TW
          type: mozilla-foundation/common_voice_11_0
          config: zh-TW
          split: test
          args: zh-TW
        metrics:
          - type: wer
            value: 32.04202832343535
            name: Wer
---

# Whisper Small Traditional Chinese

This model is a fine-tuned version of [kimbochen/whisper-small-zh-tw](https://huggingface.co/kimbochen/whisper-small-zh-tw) on the mozilla-foundation/common_voice_11_0 zh-TW dataset. It achieves the following results on the evaluation set:

- Loss: 0.4334
- Wer: 32.0420
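
The snippet below is a minimal usage sketch, not part of the original card: it loads the checkpoint through the standard transformers ASR pipeline. The repo id `kimbochen/whisper-small-zh-tw` is taken from the metadata above, and `audio.wav` is a placeholder path.

```python
# Minimal usage sketch (assumes the standard transformers pipeline API).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="kimbochen/whisper-small-zh-tw",  # repo id from the card metadata
)

# "audio.wav" is a placeholder for a local recording (16 kHz mono works best).
print(asr("audio.wav")["text"])
```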

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1200
- training_steps: 2000
- mixed_precision_training: Native AMP
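
For readers who want to reproduce the run, here is a hedged sketch of how the hyperparameters above could be expressed with transformers' `Seq2SeqTrainingArguments`. The `output_dir` and the evaluation cadence (every 400 steps, matching the results table below) are assumptions, not stated in the card.

```python
# Sketch only: maps the listed hyperparameters onto Seq2SeqTrainingArguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-zh-tw",  # assumed; not given in the card
    learning_rate=1e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=1200,
    max_steps=2000,
    fp16=True,  # "Native AMP" mixed-precision training
    evaluation_strategy="steps",  # assumed from the 400-step eval cadence
    eval_steps=400,
)
```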

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0066        | 2.05  | 400  | 0.3743          | 32.9100 |
| 0.0084        | 5.03  | 800  | 0.3787          | 33.4171 |
| 0.0098        | 8.01  | 1200 | 0.3979          | 33.2481 |
| 0.0019        | 10.06 | 1600 | 0.4084          | 32.3116 |
| 0.0008        | 13.04 | 2000 | 0.4334          | 32.0420 |
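
WER figures like those above can be computed with the Hugging Face `evaluate` library. The snippet below is an illustration only: the transcripts are made up, and the card does not state how the Chinese text was tokenized before scoring.

```python
# Illustrative only: computing WER with the `evaluate` library.
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["今天 天氣 很 好"]  # hypothetical space-separated hypothesis
references = ["今天 天氣 真 好"]   # hypothetical reference
print(100 * wer_metric.compute(predictions=predictions, references=references))
# -> 25.0 (1 substitution out of 4 tokens)
```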

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2