
Whisper-Tiny-german-HanNeurAI

This model is a fine-tuned version of openai/whisper-tiny on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5505
  • WER: 31.4346

This model is part of my school project. It was trained on a shuffled 100k-row subset of the training split because compute resources were limited.

Additional information can be found in the GitHub repository HanCreation/Whisper-Tiny-German.
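
For quick testing, a minimal inference sketch using the Transformers pipeline API is shown below. It loads the checkpoint by its Hub repository id; the audio file path is a placeholder.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for German automatic speech recognition.
asr = pipeline(
    "automatic-speech-recognition",
    model="LiquAId/whisper-tiny-german-HanNeurAI",
)

# Transcribe a local German audio file (placeholder path).
result = asr("sample_german_audio.wav")
print(result["text"])
```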

Training hyperparameters

The following hyperparameters were used during training (a hedged training-arguments sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 4000
  • mixed_precision_training: Native AMP
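
These values map onto the Transformers Seq2SeqTrainingArguments roughly as in the sketch below. The output directory and the evaluation cadence of 1000 steps are assumptions (the cadence is inferred from the results table), not taken from the card.

```python
from transformers import Seq2SeqTrainingArguments

# Hedged sketch of training arguments matching the listed hyperparameters.
# Adam betas (0.9, 0.999) and epsilon 1e-8 are the trainer defaults.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-german-HanNeurAI",  # assumed output path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    warmup_steps=500,
    max_steps=4000,
    lr_scheduler_type="linear",
    fp16=True,                    # Native AMP mixed precision
    evaluation_strategy="steps",  # assumed: evaluate every 1000 steps
    eval_steps=1000,
    predict_with_generate=True,   # generate transcripts during eval so WER can be computed
)
```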

Training results

Training Loss | Epoch | Step | Validation Loss | WER
0.4824        | 0.16  | 1000 | 0.6305          | 35.5019
0.4284        | 0.32  | 2000 | 0.5855          | 33.3615
0.4152        | 0.48  | 3000 | 0.5610          | 32.1068
0.4387        | 0.64  | 4000 | 0.5505          | 31.4346
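
WER here is the standard word error rate reported as a percentage. A minimal sketch of computing it with the evaluate library is shown below; the transcript lists are hypothetical stand-ins for the decoded predictions and references from the evaluation loop.

```python
import evaluate

# Hypothetical decoded transcripts; in practice these come from the eval loop.
references = ["das ist ein beispiel", "guten morgen"]
predictions = ["das ist ein beispiel", "guten tag"]

wer_metric = evaluate.load("wer")
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```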

Framework versions

  • Transformers 4.40.2
  • PyTorch 2.3.0
  • Datasets 2.19.1
  • Tokenizers 0.19.1

Model description

This model is openai/whisper-tiny fine-tuned for German automatic speech recognition on the Common Voice 11.0 dataset.

Intended uses & limitations

The model is intended for German speech transcription. Given the roughly 31% WER on the Common Voice 11.0 evaluation set and the reduced 100k-row training subset, it should be treated as an educational/experimental model rather than a production-quality system.

Training and evaluation data

The model was fine-tuned and evaluated on the German subset of Common Voice 11.0. Training used a shuffled 100k-row subset of the train split because compute resources were limited (see the sketch below).
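
A sketch of drawing such a subset with the datasets library is shown below; the shuffle seed is an assumption, and access to the Common Voice 11.0 dataset on the Hub must be granted first.

```python
from datasets import load_dataset

# Load the German training split of Common Voice 11.0
# (requires accepting the dataset's terms on the Hugging Face Hub).
train = load_dataset("mozilla-foundation/common_voice_11_0", "de", split="train")

# Shuffle and keep 100k rows, mirroring the subset described above.
# The seed is an assumption; the card does not specify one for shuffling.
train = train.shuffle(seed=42).select(range(100_000))
```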


Model size

  • 37.8M parameters (F32, safetensors)