flan-t5-base-samsum-farag

This model is a fine-tuned version of google/flan-t5-base on the samsum dataset. It achieves the following results on the evaluation set:

  • Loss: 1.3695
  • Rouge1: 47.4352
  • Rouge2: 23.613
  • Rougel: 39.8977
  • Rougelsum: 43.5852
  • Gen Len: 17.3529
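
A quick way to try the model is the transformers summarization pipeline. This is a minimal sketch; the example dialogue is illustrative, and `max_length` is an arbitrary choice:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
summarizer = pipeline(
    "summarization",
    model="mohadfarag1/flan-t5-base-samsum-farag",
)

# A made-up samsum-style chat; real inputs are short messenger dialogues.
dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)

print(summarizer(dialogue, max_length=50)[0]["summary_text"])
```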

Model description

This model is google/flan-t5-base fine-tuned for abstractive dialogue summarization on samsum, a corpus of messenger-style conversations paired with human-written summaries.

Intended uses & limitations

The model is intended for summarizing short English chat dialogues similar to those in samsum. Performance on other domains has not been documented, and the limitations of the base flan-t5-base model carry over.

Training and evaluation data

The model was fine-tuned and evaluated on the samsum dataset; see the sketch below for loading it.
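
The dataset can be pulled with the datasets library. A minimal sketch, assuming the standard samsum splits and fields:

```python
from datasets import load_dataset

# samsum is distributed as a .7z archive, so the py7zr package may need
# to be installed alongside datasets.
dataset = load_dataset("samsum")

# Each record has a "dialogue" (the chat) and a "summary" (the target).
sample = dataset["train"][0]
print(sample["dialogue"])
print(sample["summary"])
```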

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto Seq2SeqTrainingArguments follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
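
For reference, these settings would map onto transformers roughly as below. This is a hedged sketch, not the author's training script; `output_dir` and `evaluation_strategy` are assumptions:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-base-samsum-farag",  # placeholder output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    # The Adam betas/epsilon below match both the list above and the
    # transformers defaults.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    predict_with_generate=True,   # needed to compute ROUGE and Gen Len
    evaluation_strategy="epoch",  # assumption: the table reports per-epoch eval
)
```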

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|---------------|-------|------|-----------------|---------|---------|---------|-----------|---------|
| 1.4497        | 1.0   | 1842 | 1.3848          | 46.3358 | 22.5925 | 38.7161 | 42.6084   | 17.2918 |
| 1.3474        | 2.0   | 3684 | 1.3717          | 47.1291 | 23.2809 | 39.4633 | 43.3246   | 17.2735 |
| 1.2818        | 3.0   | 5526 | 1.3701          | 47.3490 | 23.4894 | 39.7933 | 43.4507   | 17.2479 |
| 1.2285        | 4.0   | 7368 | 1.3695          | 47.4352 | 23.6130 | 39.8977 | 43.5852   | 17.3529 |
| 1.1960        | 5.0   | 9210 | 1.3735          | 47.3488 | 23.6475 | 39.6788 | 43.5230   | 17.3138 |
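
The ROUGE columns can be recomputed with the evaluate library. A minimal sketch; the prediction and reference strings below are placeholders:

```python
import evaluate

# The "rouge" metric requires the rouge_score package to be installed.
rouge = evaluate.load("rouge")

# Placeholders; in practice these are model generations and the samsum
# reference summaries.
predictions = ["Amanda baked cookies and will bring Jerry some tomorrow."]
references = ["Amanda baked cookies and will bring some to Jerry tomorrow."]

# Scores come back in [0, 1]; multiply by 100 to compare with the table.
scores = rouge.compute(
    predictions=predictions, references=references, use_stemmer=True
)
print({name: round(value * 100, 4) for name, value in scores.items()})
```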

Framework versions

  • Transformers 4.35.0
  • Pytorch 2.1.0+cu118
  • Datasets 2.14.6
  • Tokenizers 0.14.1