
en_bn_summarize_v7

This model is a fine-tuned version of csebuetnlp/mT5_m2m_crossSum on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.8058
  • Rouge-1: 18.1261
  • Rouge-2: 6.4386
  • Rouge-L: 15.755
  • Gen Len: 43.3354
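
As a quick-start aid, the following is a minimal inference sketch using Hugging Face Transformers. The generation settings (beam count, length limit, n-gram blocking) are illustrative assumptions rather than values taken from this card, and since the base model is csebuetnlp/mT5_m2m_crossSum, its target-language decoder-start convention may also apply; consult the base model's card.

```python
# Minimal inference sketch. The generation parameters below are illustrative
# assumptions, not values documented in this card.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "mHossain/en_bn_summarize_v7"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "Your English article text goes here."  # placeholder input
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)

# Note: the base model (csebuetnlp/mT5_m2m_crossSum) documents a
# target-language decoder_start_token_id convention; it may be needed here
# to steer the model toward Bengali output.
output_ids = model.generate(
    **inputs,
    num_beams=4,             # assumed
    max_length=84,           # assumed; eval Gen Len above averages ~43 tokens
    no_repeat_ngram_size=2,  # assumed
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```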

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a Trainer-style sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 8
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 10
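
For reference, the sketch below expresses these settings as Seq2SeqTrainingArguments. The author's actual training script is not published, so treat this as a reconstruction under standard Trainer assumptions; the output directory is a placeholder. Note that total_train_batch_size 8 is the effective batch size (4 per device × 2 accumulation steps), and the listed Adam betas/epsilon match the optimizer defaults.

```python
# Sketch only: reconstructs the listed hyperparameters as training arguments.
# Placeholder names are marked; flags like predict_with_generate are assumed.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="en_bn_summarize_v7",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,    # effective train batch size: 4 * 2 = 8
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=10,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the defaults, so they
    # need no explicit arguments here.
    predict_with_generate=True,       # assumed, since ROUGE is reported
    evaluation_strategy="epoch",      # assumed from the per-epoch eval rows
)
```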

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-L | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|
| 1.8582        | 1.0   | 154  | 1.8089          | 17.2361 | 6.3031  | 15.1651 | 42.4348 |
| 1.6492        | 2.0   | 308  | 1.7993          | 16.9045 | 6.083   | 14.6343 | 41.472  |
| 1.6278        | 3.0   | 462  | 1.8006          | 16.909  | 6.1661  | 14.6043 | 43.4969 |
| 1.5656        | 4.0   | 616  | 1.8016          | 17.1664 | 6.3668  | 15.0702 | 42.1925 |
| 1.5456        | 5.0   | 770  | 1.7983          | 16.8696 | 5.9485  | 14.729  | 42.2298 |
| 1.5146        | 6.0   | 924  | 1.8060          | 17.2806 | 5.98    | 14.7861 | 43.3602 |
| 1.4575        | 7.0   | 1078 | 1.8024          | 17.6126 | 6.1446  | 15.1649 | 43.3665 |
| 1.4988        | 8.0   | 1232 | 1.8046          | 17.619  | 6.1422  | 15.1738 | 43.3913 |
| 1.4637        | 9.0   | 1386 | 1.8059          | 17.6713 | 6.2475  | 15.3152 | 44.0621 |
| 1.4593        | 10.0  | 1540 | 1.8058          | 18.1261 | 6.4386  | 15.755  | 43.3354 |
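
The card does not state how the ROUGE scores above were computed. A common recipe uses the Hugging Face evaluate library, sketched below as an assumption rather than the author's pipeline; note that scoring Bengali output fairly generally requires a language-aware tokenizer, such as the multilingual ROUGE implementation used by the XL-Sum/CrossSum authors.

```python
# Hypothetical metric computation; shown as an assumption since the card does
# not document the evaluation pipeline. Default rouge tokenization is
# English-oriented and may under-score Bengali text.
import evaluate

rouge = evaluate.load("rouge")

predictions = ["<generated Bengali summary>"]  # placeholder model outputs
references = ["<reference Bengali summary>"]   # placeholder gold summaries

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # includes rouge1, rouge2, rougeL
```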

Framework versions

  • Transformers 4.33.1
  • Pytorch 2.0.1+cu118
  • Datasets 2.14.5
  • Tokenizers 0.13.3

Model tree for mHossain/en_bn_summarize_v7

Finetuned from csebuetnlp/mT5_m2m_crossSum.