
model

This model is a fine-tuned version of google-t5/t5-small on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1818
  • Edit Distance: 13.598
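
Because the task and training data are not documented, the expected input format is unknown. The following is a minimal, hypothetical usage sketch; it assumes the checkpoint can be downloaded under the raf-dc/model repository id and that plain text input is acceptable.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Hypothetical sketch: the repository id is taken from this card; the input
# text below is a placeholder, since the training task is undocumented.
model_id = "raf-dc/model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("example input text", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```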

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 18
  • eval_batch_size: 18
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 18
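
As a rough sketch, these hyperparameters map onto Seq2SeqTrainingArguments as shown below. The output_dir name and the per-epoch evaluation strategy are assumptions (the results table is logged per epoch); the Adam betas and epsilon match the library defaults.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: reproduces the hyperparameters listed above; output_dir and
# evaluation_strategy are assumptions, not taken from the original script.
training_args = Seq2SeqTrainingArguments(
    output_dir="model",              # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=18,
    per_device_eval_batch_size=18,
    seed=42,
    num_train_epochs=18,
    lr_scheduler_type="linear",
    adam_beta1=0.9,                  # Adam betas/epsilon equal the defaults
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",     # assumption: matches the per-epoch results
)
```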

Training results

| Training Loss | Epoch | Step | Validation Loss | Edit Distance |
|---------------|-------|------|-----------------|---------------|
| 0.7351        | 1.0   | 500  | 0.2832          | 13.844        |
| 0.3224        | 2.0   | 1000 | 0.2401          | 13.85         |
| 0.2788        | 3.0   | 1500 | 0.2285          | 13.795        |
| 0.2595        | 4.0   | 2000 | 0.2179          | 13.805        |
| 0.2469        | 5.0   | 2500 | 0.2066          | 13.687        |
| 0.233         | 6.0   | 3000 | 0.1912          | 13.67         |
| 0.219         | 7.0   | 3500 | 0.1874          | 13.658        |
| 0.2135        | 8.0   | 4000 | 0.1895          | 13.65         |
| 0.2101        | 9.0   | 4500 | 0.1883          | 13.643        |
| 0.2074        | 10.0  | 5000 | 0.1836          | 13.643        |
| 0.2057        | 11.0  | 5500 | 0.1825          | 13.649        |
| 0.2042        | 12.0  | 6000 | 0.1834          | 13.614        |
| 0.2034        | 13.0  | 6500 | 0.1828          | 13.623        |
| 0.2017        | 14.0  | 7000 | 0.1820          | 13.653        |
| 0.2017        | 15.0  | 7500 | 0.1824          | 13.634        |
| 0.2004        | 16.0  | 8000 | 0.1822          | 13.641        |
| 0.2006        | 17.0  | 8500 | 0.1817          | 13.62         |
| 0.2005        | 18.0  | 9000 | 0.1818          | 13.598        |
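
The card does not specify how the Edit Distance metric is computed. One plausible reading is a mean character-level Levenshtein distance between decoded predictions and references; the sketch below illustrates that assumption and is not necessarily the metric used by the original training script.

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance between two strings.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,                # deletion
                curr[j - 1] + 1,            # insertion
                prev[j - 1] + (ca != cb),   # substitution
            ))
        prev = curr
    return prev[-1]

def mean_edit_distance(predictions, references):
    # Average character-level edit distance over decoded prediction/reference pairs.
    return sum(levenshtein(p, r) for p, r in zip(predictions, references)) / len(predictions)
```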

Framework versions

  • Transformers 4.34.0
  • Pytorch 2.0.1+cu118
  • Datasets 2.14.5
  • Tokenizers 0.14.1
