flan-t5-base-samsum5

This model is a fine-tuned version of google/flan-t5-base on the samsum dataset. It achieves the following results on the evaluation set:

  • Loss: 1.3676
  • Rouge1: 46.8382
  • Rouge2: 23.107
  • Rougel: 39.5293
  • Rougelsum: 42.8917
  • Gen Len: 17.3675
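
A minimal usage sketch, assuming the checkpoint is published on the Hugging Face Hub as rajistics/flan-t5-base-samsum5 (the repository this card belongs to); the example dialogue is purely illustrative:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a summarization pipeline.
summarizer = pipeline("summarization", model="rajistics/flan-t5-base-samsum5")

# Illustrative samsum-style dialogue (not a real dataset sample).
dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)

# Generate a short abstractive summary of the dialogue.
print(summarizer(dialogue, max_length=60)[0]["summary_text"])
```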

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

The model was fine-tuned and evaluated on the samsum dataset of messenger-style dialogues paired with human-written summaries; the splits and preprocessing otherwise follow the standard dataset release.

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 6
  • eval_batch_size: 6
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
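
For reference, a minimal sketch of a Seq2SeqTrainer setup matching these hyperparameters (the exact training script is not part of this card; `train_ds` and `eval_ds` are assumed to be already-tokenized samsum splits):

```python
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

# Mirrors the hyperparameters listed above; the default optimizer is
# Adam(W) with betas=(0.9, 0.999) and epsilon=1e-08.
args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-base-samsum5",
    learning_rate=5e-5,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=6,
    num_train_epochs=5,
    seed=42,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,   # tokenized samsum train split (assumed)
    eval_dataset=eval_ds,     # tokenized samsum validation split (assumed)
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)
trainer.train()
```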

Training results

| Training Loss | Epoch | Step  | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|---------------|-------|-------|-----------------|---------|---------|---------|-----------|---------|
| 1.4315        | 1.0   | 2456  | 1.3793          | 46.7908 | 22.8618 | 39.3679 | 43.0337   | 17.1148 |
| 1.3520        | 2.0   | 4912  | 1.3676          | 46.8382 | 23.1070 | 39.5293 | 42.8917   | 17.3675 |
| 1.2638        | 3.0   | 7368  | 1.3684          | 47.3491 | 23.5078 | 39.9401 | 43.4065   | 17.1832 |
| 1.2238        | 4.0   | 9824  | 1.3727          | 47.2949 | 23.8671 | 40.0570 | 43.6450   | 17.3260 |
| 1.1632        | 5.0   | 12280 | 1.3737          | 47.3124 | 23.7118 | 40.0067 | 43.6227   | 17.3126 |
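
The ROUGE columns above are aggregated F-measures; a minimal sketch of computing comparable scores with the evaluate library (the prediction/reference pair below is a made-up placeholder):

```python
import evaluate

rouge = evaluate.load("rouge")

# Placeholder prediction/reference pair; in practice these would be model
# generations and the samsum reference summaries.
predictions = ["Amanda will bring Jerry some cookies tomorrow."]
references = ["Amanda baked cookies and will bring Jerry some tomorrow."]

scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
# rouge1 / rouge2 / rougeL / rougeLsum F-measures in [0, 1];
# the table above reports these values multiplied by 100.
print(scores)
```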

Framework versions

  • Transformers 4.27.1
  • Pytorch 1.13.1+cu116
  • Datasets 2.10.1
  • Tokenizers 0.13.2