# t5-small-finetuned-dialogsum-v2
This model is a fine-tuned version of t5-small on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.3041
- Rouge1: 35.9525
- Rouge2: 13.1826
- RougeL: 30.3535
- RougeLsum: 32.2144
- Gen Len: 18.902
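A minimal inference sketch for dialogue summarization, assuming the checkpoint is available on the Hugging Face Hub under the repository id saileshamandola/t5-small-finetuned-dialogsum-v2 (taken from this card's name); the example dialogue and generation settings are illustrative only:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint; the repo id is assumed from this card's name.
summarizer = pipeline(
    "summarization",
    model="saileshamandola/t5-small-finetuned-dialogsum-v2",
)

dialogue = (
    "#Person1#: Hi, could we move our meeting to Thursday afternoon?\n"
    "#Person2#: Thursday works. Does 3 pm suit you?\n"
    "#Person1#: 3 pm is fine, thanks. I'll update the invite."
)

# A max_length of ~20 tokens roughly matches the reported Gen Len of ~18.9.
summary = summarizer(dialogue, max_length=20, min_length=5, do_sample=False)
print(summary[0]["summary_text"])
```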
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
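The card does not record the dataset, but the model name points to DialogSum. A hedged sketch of loading a public DialogSum copy from the Hub (the repo id knkarthick/dialogsum and its column names are assumptions, not confirmed by this card):

```python
from datasets import load_dataset

# Assumed dataset repo; this card does not state which DialogSum copy was used.
dataset = load_dataset("knkarthick/dialogsum")

# DialogSum pairs a multi-turn dialogue with a short reference summary.
example = dataset["train"][0]
print(example["dialogue"][:200])
print(example["summary"])
```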
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
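These values map directly onto Seq2SeqTrainingArguments from transformers; a minimal sketch under the assumption that the run used the standard Seq2SeqTrainer setup (output_dir and the evaluation/generation flags below are not stated on this card):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-finetuned-dialogsum-v2",  # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=8,
    fp16=True,                    # Native AMP mixed-precision training
    evaluation_strategy="epoch",  # assumed: the results table reports one row per epoch
    predict_with_generate=True,   # assumed: needed to compute ROUGE on generated text
)
```

No explicit optimizer arguments are needed here: Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer's default AdamW configuration.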
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.7356        | 1.0   | 779  | 1.4283          | 33.2097 | 10.6868 | 27.8451 | 29.9371   | 18.854  |
| 1.5042        | 2.0   | 1558 | 1.3706          | 34.3543 | 11.7561 | 28.8686 | 31.0041   | 18.842  |
| 1.4725        | 3.0   | 2337 | 1.3471          | 34.5334 | 11.9629 | 29.1625 | 31.1241   | 18.88   |
| 1.4329        | 4.0   | 3116 | 1.3299          | 35.31   | 12.6214 | 29.7381 | 31.7618   | 18.918  |
| 1.424         | 5.0   | 3895 | 1.3153          | 35.5141 | 13.2169 | 30.3033 | 32.0904   | 18.928  |
| 1.4044        | 6.0   | 4674 | 1.3090          | 35.7821 | 12.9692 | 30.3978 | 32.1945   | 18.912  |
| 1.3984        | 7.0   | 5453 | 1.3050          | 35.9485 | 13.3086 | 30.3416 | 32.2398   | 18.906  |
| 1.3908        | 8.0   | 6232 | 1.3041          | 35.9525 | 13.1826 | 30.3535 | 32.2144   | 18.902  |
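The ROUGE columns can be reproduced with the evaluate library. A sketch of a compute_metrics function as typically passed to a Seq2SeqTrainer (the stemmer setting and the scaling to a 0-100 scale are assumptions based on how the table reads):

```python
import evaluate
import numpy as np
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
rouge = evaluate.load("rouge")

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)
    # Labels are padded with -100 for the loss; restore pad tokens before decoding.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)

    result = rouge.compute(
        predictions=decoded_preds, references=decoded_labels, use_stemmer=True
    )
    # Report scores on a 0-100 scale, as in the table above.
    result = {key: round(value * 100, 4) for key, value in result.items()}

    # Gen Len: mean number of non-padding tokens in the generated summaries.
    gen_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in predictions]
    result["gen_len"] = float(np.mean(gen_lens))
    return result
```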
### Framework versions
- Transformers 4.35.2
- PyTorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0