speecht5_finetuned_Mar

This model is a fine-tuned version of microsoft/speecht5_tts on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4785

Model description

More information needed

Intended uses & limitations

More information needed
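
Since the base model is microsoft/speecht5_tts, this checkpoint can presumably be loaded like any SpeechT5 text-to-speech model. The snippet below is a minimal, hedged sketch rather than an official usage example: the zero speaker embedding is only a placeholder (in practice a 512-dim x-vector from a speaker encoder such as speechbrain/spkrec-xvect-voxceleb is supplied), and soundfile is used just to save the generated waveform.

```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

# Load the fine-tuned model together with the standard SpeechT5 HiFi-GAN vocoder.
processor = SpeechT5Processor.from_pretrained("maghrane/speecht5_finetuned_Mar")
model = SpeechT5ForTextToSpeech.from_pretrained("maghrane/speecht5_finetuned_Mar")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hello, this is a test sentence.", return_tensors="pt")

# SpeechT5 expects a (1, 512) x-vector speaker embedding; a zero vector is a
# placeholder only and will not reproduce the fine-tuned speaker's voice.
speaker_embeddings = torch.zeros((1, 512))

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("output.wav", speech.numpy(), samplerate=16000)  # SpeechT5 outputs 16 kHz audio
```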

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 8
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 50
  • training_steps: 3000
  • mixed_precision_training: Native AMP
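
For reference, a minimal sketch of how these settings map onto Seq2SeqTrainingArguments in 🤗 Transformers; the output_dir and the evaluation/logging cadence are assumptions (the table below suggests evaluation every 100 steps), not values reported in this card.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_finetuned_Mar",  # placeholder, not from this card
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=8,        # effective train batch size of 8
    learning_rate=1e-4,
    lr_scheduler_type="linear",
    warmup_steps=50,
    max_steps=3000,
    seed=42,
    optim="adamw_torch",
    fp16=True,                            # Native AMP mixed-precision training
    eval_strategy="steps",                # assumed: evaluate every 100 steps
    eval_steps=100,
    logging_steps=100,
)
```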

Training results

| Training Loss | Epoch   | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.6818        | 0.7976  | 100  | 0.6565          |
| 0.6107        | 1.5952  | 200  | 0.5797          |
| 0.6033        | 2.3928  | 300  | 0.5716          |
| 0.5893        | 3.1904  | 400  | 0.5686          |
| 0.5919        | 3.9880  | 500  | 0.5624          |
| 0.5732        | 4.7856  | 600  | 0.5932          |
| 0.5677        | 5.5833  | 700  | 0.5485          |
| 0.5521        | 6.3809  | 800  | 0.5407          |
| 0.5513        | 7.1785  | 900  | 0.5348          |
| 0.5488        | 7.9761  | 1000 | 0.5366          |
| 0.5344        | 8.7737  | 1100 | 0.5267          |
| 0.5329        | 9.5713  | 1200 | 0.5163          |
| 0.5118        | 10.3689 | 1300 | 0.5172          |
| 0.5126        | 11.1665 | 1400 | 0.5089          |
| 0.5256        | 11.9641 | 1500 | 0.5104          |
| 0.5126        | 12.7617 | 1600 | 0.5054          |
| 0.5034        | 13.5593 | 1700 | 0.5023          |
| 0.4986        | 14.3569 | 1800 | 0.4974          |
| 0.4926        | 15.1545 | 1900 | 0.5081          |
| 0.4961        | 15.9521 | 2000 | 0.4929          |
| 0.4886        | 16.7498 | 2100 | 0.4863          |
| 0.491         | 17.5474 | 2200 | 0.4925          |
| 0.4845        | 18.3450 | 2300 | 0.4859          |
| 0.4726        | 19.1426 | 2400 | 0.4835          |
| 0.4786        | 19.9402 | 2500 | 0.4829          |
| 0.4732        | 20.7378 | 2600 | 0.4800          |
| 0.4713        | 21.5354 | 2700 | 0.4737          |
| 0.4571        | 22.3330 | 2800 | 0.4789          |
| 0.4559        | 23.1306 | 2900 | 0.4771          |
| 0.4552        | 23.9282 | 3000 | 0.4785          |

Framework versions

  • Transformers 4.46.3
  • Pytorch 2.5.1+cu121
  • Datasets 3.1.0
  • Tokenizers 0.20.3