---
library_name: transformers
license: mit
base_model: MBZUAI/speecht5_tts_clartts_ar
tags:
  - generated_from_trainer
model-index:
  - name: Arabictts
    results: []
---

# Arabictts

This model is a fine-tuned version of [MBZUAI/speecht5_tts_clartts_ar](https://huggingface.co/MBZUAI/speecht5_tts_clartts_ar) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.5791

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 700
- mixed_precision_training: Native AMP
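The interaction of these settings can be sketched in plain Python. `lr_at` is a hypothetical helper that mirrors the behavior of a linear schedule with warmup (not the Trainer's actual implementation), and the effective batch size is the per-device batch size times the gradient accumulation steps:

```python
# Effective batch size: per-device train batch size x gradient accumulation steps.
train_batch_size = 4
gradient_accumulation_steps = 8
total_train_batch_size = train_batch_size * gradient_accumulation_steps
assert total_train_batch_size == 32


def lr_at(step, base_lr=1e-4, warmup_steps=100, total_steps=700):
    """Linear schedule with warmup: ramp from 0 to base_lr over warmup_steps,
    then decay linearly to 0 at total_steps."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)


print(lr_at(100))  # peak learning rate: 1e-4
print(lr_at(700))  # fully decayed at the final step: 0.0
```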

### Training results

| Training Loss | Epoch   | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.6455        | 6.9565  | 100  | 0.6248          |
| 0.6067        | 13.9130 | 200  | 0.5949          |
| 0.5847        | 20.8696 | 300  | 0.5851          |
| 0.5713        | 27.8261 | 400  | 0.5806          |
| 0.5597        | 34.7826 | 500  | 0.5779          |
| 0.5525        | 41.7391 | 600  | 0.5832          |
| 0.5511        | 48.6957 | 700  | 0.5791          |
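As a back-of-envelope check (an inference from the logged values, not something stated in the card), the epoch/step ratio above combined with the total train batch size of 32 implies a training set of roughly 460 examples:

```python
# At step 100 the logged epoch is 6.9565, so one epoch takes ~14.375 optimizer steps.
steps_per_epoch = 100 / 6.9565
# Each optimizer step consumes total_train_batch_size = 32 examples.
approx_dataset_size = steps_per_epoch * 32
print(round(approx_dataset_size))  # roughly 460 examples
```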

### Framework versions

- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3