
t5-finetuned-amazon-english

This model is a fine-tuned version of t5-small on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set:

  • Loss: 3.1713
  • Rouge1: 19.1814
  • Rouge2: 9.8673
  • Rougel: 18.1982
  • Rougelsum: 18.2963
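The Rouge1 score above measures unigram overlap between generated and reference summaries, reported as an F1-style score scaled to 0-100. A minimal sketch of that statistic, assuming plain whitespace tokenization (the `rouge_score` package used in evaluation additionally applies stemming, and computes Rougel via longest common subsequence rather than unigram counts):

```python
from collections import Counter

def rouge1_f(candidate: str, reference: str) -> float:
    """Unigram-overlap F1 between a candidate and a reference summary.

    Sketch only: lowercased whitespace tokens, no stemming.
    """
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # shared unigrams, counted with multiplicity
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

Multiplying the result by 100 gives a number on the same scale as the metrics listed above.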

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5.6e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 8
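With `lr_scheduler_type: linear` and no warmup, the learning rate decays from 5.6e-05 at step 0 to zero at the final step. A small arithmetic sketch, taking the 771 steps per epoch from the Step column of the training results below (so 771 × 8 = 6168 total steps, matching the last logged step):

```python
# Linear learning-rate decay as configured above (no warmup assumed).
BASE_LR = 5.6e-5
STEPS_PER_EPOCH = 771          # from the Step column of the training results
NUM_EPOCHS = 8
TOTAL_STEPS = STEPS_PER_EPOCH * NUM_EPOCHS  # 6168

def lr_at(step: int) -> float:
    # Decays linearly from BASE_LR at step 0 to 0.0 at TOTAL_STEPS.
    return BASE_LR * max(0.0, 1.0 - step / TOTAL_STEPS)
```

For example, halfway through training (step 3084, end of epoch 4) the rate is half the initial value, 2.8e-05.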

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2 | Rougel  | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 3.3583        | 1.0   | 771  | 3.2513          | 16.6865 | 9.0598 | 15.8299 | 15.8472   |
| 3.1022        | 2.0   | 1542 | 3.2147          | 16.8499 | 9.4849 | 16.1568 | 16.2437   |
| 3.0067        | 3.0   | 2313 | 3.1718          | 16.9516 | 8.7620 | 16.1040 | 16.2186   |
| 2.9482        | 4.0   | 3084 | 3.1854          | 18.9582 | 9.5416 | 18.0846 | 18.2938   |
| 2.8934        | 5.0   | 3855 | 3.1669          | 18.8570 | 9.9340 | 17.9027 | 18.0272   |
| 2.8389        | 6.0   | 4626 | 3.1782          | 18.6736 | 9.3260 | 17.6943 | 17.8852   |
| 2.8174        | 7.0   | 5397 | 3.1709          | 18.4342 | 9.6936 | 17.5714 | 17.6516   |
| 2.8000        | 8.0   | 6168 | 3.1713          | 19.1814 | 9.8673 | 18.1982 | 18.2963   |

Framework versions

  • Transformers 4.22.0
  • Pytorch 1.12.1+cu113
  • Datasets 2.4.0
  • Tokenizers 0.12.1