
# speecht5_finetuned_voice_dataset_bn_v_3

This model is a fine-tuned version of microsoft/speecht5_tts on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5008

## Model description

More information needed

## Intended uses & limitations

More information needed
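
The card gives no usage instructions, so here is a minimal inference sketch for a SpeechT5 TTS checkpoint. It assumes the model is hosted at `tdnathmlenthusiast/speecht5_finetuned_voice_dataset_bn_v_3` (as listed on this page) and that you can supply a 512-dimensional x-vector speaker embedding, which SpeechT5 requires to condition the voice; the function name and output path are illustrative.

```python
def synthesize(text, speaker_embedding, out_path="output.wav"):
    """Generate speech from text with the fine-tuned SpeechT5 checkpoint.

    speaker_embedding: a (1, 512) float tensor (x-vector). A common source is
    the speechbrain/spkrec-xvect-voxceleb embeddings used in the SpeechT5 docs.
    """
    import torch
    import soundfile as sf
    from transformers import (
        SpeechT5ForTextToSpeech,
        SpeechT5HifiGan,
        SpeechT5Processor,
    )

    repo = "tdnathmlenthusiast/speecht5_finetuned_voice_dataset_bn_v_3"
    processor = SpeechT5Processor.from_pretrained(repo)
    model = SpeechT5ForTextToSpeech.from_pretrained(repo)
    # Vocoder that turns the predicted spectrogram into a waveform.
    vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

    inputs = processor(text=text, return_tensors="pt")
    with torch.no_grad():
        speech = model.generate_speech(
            inputs["input_ids"], speaker_embedding, vocoder=vocoder
        )
    # SpeechT5 produces 16 kHz audio.
    sf.write(out_path, speech.numpy(), samplerate=16000)
    return out_path
```

All heavy imports happen inside the function so the sketch can be read without the dependencies installed.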

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 4
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 125
  • training_steps: 3000
  • mixed_precision_training: Native AMP
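
The list above maps directly onto the 🤗 Transformers training arguments; a sketch of the equivalent configuration, where `output_dir` and any evaluation cadence are illustrative assumptions rather than values taken from the original run:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_finetuned_voice_dataset_bn_v_3",  # assumed name
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,  # effective train batch size: 4 * 8 = 32
    warmup_steps=125,
    max_steps=3000,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,  # native AMP mixed precision
    # Adam betas=(0.9, 0.999) and epsilon=1e-8 are the optimizer defaults.
)
```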

## Training results

| Training Loss | Epoch    | Step | Validation Loss |
|:-------------:|:--------:|:----:|:---------------:|
| 0.6046        | 12.2699  | 250  | 0.5646          |
| 0.5583        | 24.5399  | 500  | 0.5268          |
| 0.5364        | 36.8098  | 750  | 0.5188          |
| 0.5171        | 49.0798  | 1000 | 0.5087          |
| 0.5098        | 61.3497  | 1250 | 0.5018          |
| 0.501         | 73.6196  | 1500 | 0.5022          |
| 0.4984        | 85.8896  | 1750 | 0.4955          |
| 0.4929        | 98.1595  | 2000 | 0.5000          |
| 0.4933        | 110.4294 | 2250 | 0.4944          |
| 0.4868        | 122.6994 | 2500 | 0.5006          |
| 0.4805        | 134.9693 | 2750 | 0.4991          |
| 0.4802        | 147.2393 | 3000 | 0.5008          |

## Framework versions

  • Transformers 4.44.2
  • Pytorch 2.4.1+cu121
  • Datasets 3.0.1
  • Tokenizers 0.19.1
