text_summarization-finetuned-stocknews

This model is a fine-tuned version of Falconsai/text_summarization on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.5087
  • Rouge1: 28.1323
  • Rouge2: 14.1505
  • Rougel: 23.7163
  • Rougelsum: 24.743
  • Gen Len: 19.0
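As a minimal usage sketch (assuming the checkpoint is published under the id shown in the model tree on this page, `sujayC66/text_summarization-finetuned-stocknews`, and that Transformers is installed):

```python
# Minimal inference sketch: load the fine-tuned checkpoint through the
# Transformers summarization pipeline. The model id is taken from the
# model tree on this page; the example article is made up.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="sujayC66/text_summarization-finetuned-stocknews",
)

article = (
    "Shares of the company rose 5% on Tuesday after it reported "
    "quarterly revenue that beat analyst expectations."
)
# The Gen Len of 19.0 above indicates short summaries; cap generation accordingly.
summary = summarizer(article, max_length=20, min_length=5)[0]["summary_text"]
print(summary)
```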

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 50
  • mixed_precision_training: Native AMP
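With the linear scheduler above, the learning rate decays from 2e-05 to 0 over the run (1250 optimizer steps in this training, per the results table). A small self-contained sketch of that schedule, assuming zero warmup steps since none are listed:

```python
# Sketch of the linear LR decay schedule listed above.
# Assumes zero warmup steps (none are given in the hyperparameters).
def linear_lr(step, learning_rate=2e-05, total_steps=1250):
    """Learning rate at a given optimizer step under linear decay to zero."""
    remaining = max(0.0, (total_steps - step) / total_steps)
    return learning_rate * remaining

print(linear_lr(0))     # full learning rate at the start
print(linear_lr(625))   # halfway through: half the learning rate
print(linear_lr(1250))  # fully decayed to zero at the end
```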

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|---|---|---|---|---|---|---|---|---|
| No log | 1.0 | 25 | 1.8901 | 26.1517 | 11.6615 | 21.4583 | 22.9556 | 19.0 |
| No log | 2.0 | 50 | 1.7909 | 25.9481 | 11.4621 | 21.1748 | 22.8127 | 19.0 |
| No log | 3.0 | 75 | 1.7388 | 26.412 | 12.1797 | 21.744 | 23.3289 | 19.0 |
| No log | 4.0 | 100 | 1.6988 | 26.4465 | 12.2417 | 21.7109 | 23.2402 | 19.0 |
| No log | 5.0 | 125 | 1.6752 | 26.6441 | 12.4313 | 21.7396 | 23.2725 | 19.0 |
| No log | 6.0 | 150 | 1.6531 | 26.4585 | 12.2979 | 21.7528 | 23.1338 | 19.0 |
| No log | 7.0 | 175 | 1.6386 | 26.6186 | 12.4271 | 21.8074 | 23.2756 | 19.0 |
| No log | 8.0 | 200 | 1.6263 | 26.4223 | 12.3512 | 21.7575 | 23.3278 | 19.0 |
| No log | 9.0 | 225 | 1.6124 | 26.5846 | 12.49 | 21.9218 | 23.433 | 19.0 |
| No log | 10.0 | 250 | 1.6035 | 26.8364 | 12.6954 | 22.2409 | 23.6239 | 19.0 |
| No log | 11.0 | 275 | 1.5926 | 27.0986 | 12.7881 | 22.2246 | 23.6203 | 19.0 |
| No log | 12.0 | 300 | 1.5844 | 27.4875 | 13.1342 | 22.717 | 24.0836 | 19.0 |
| No log | 13.0 | 325 | 1.5757 | 27.6863 | 13.2919 | 22.8203 | 24.1659 | 19.0 |
| No log | 14.0 | 350 | 1.5688 | 27.69 | 13.295 | 22.8364 | 24.2587 | 19.0 |
| No log | 15.0 | 375 | 1.5643 | 27.7651 | 13.5588 | 23.01 | 24.5047 | 19.0 |
| No log | 16.0 | 400 | 1.5586 | 27.8662 | 13.8812 | 23.1299 | 24.5692 | 19.0 |
| No log | 17.0 | 425 | 1.5525 | 27.5329 | 13.5729 | 22.8646 | 24.2491 | 19.0 |
| No log | 18.0 | 450 | 1.5466 | 27.2864 | 13.6465 | 22.754 | 24.0451 | 19.0 |
| No log | 19.0 | 475 | 1.5434 | 27.3062 | 13.664 | 22.7509 | 24.015 | 19.0 |
| 1.7497 | 20.0 | 500 | 1.5401 | 27.3177 | 13.8162 | 22.8012 | 24.0359 | 19.0 |
| 1.7497 | 21.0 | 525 | 1.5369 | 27.4956 | 13.9869 | 23.0248 | 24.2922 | 19.0 |
| 1.7497 | 22.0 | 550 | 1.5345 | 27.4794 | 13.7914 | 23.0306 | 24.2942 | 19.0 |
| 1.7497 | 23.0 | 575 | 1.5324 | 27.4794 | 13.7914 | 23.0306 | 24.2942 | 19.0 |
| 1.7497 | 24.0 | 600 | 1.5302 | 27.529 | 13.8756 | 23.1045 | 24.3861 | 19.0 |
| 1.7497 | 25.0 | 625 | 1.5266 | 27.8738 | 14.0877 | 23.4826 | 24.7471 | 19.0 |
| 1.7497 | 26.0 | 650 | 1.5252 | 27.9294 | 13.9793 | 23.4775 | 24.669 | 19.0 |
| 1.7497 | 27.0 | 675 | 1.5247 | 28.0046 | 14.0835 | 23.4865 | 24.7035 | 19.0 |
| 1.7497 | 28.0 | 700 | 1.5239 | 28.0085 | 14.1428 | 23.6155 | 24.8178 | 19.0 |
| 1.7497 | 29.0 | 725 | 1.5224 | 27.9738 | 14.1251 | 23.6146 | 24.7919 | 19.0 |
| 1.7497 | 30.0 | 750 | 1.5200 | 28.007 | 14.1042 | 23.653 | 24.7639 | 19.0 |
| 1.7497 | 31.0 | 775 | 1.5192 | 27.9376 | 14.0443 | 23.5673 | 24.6209 | 19.0 |
| 1.7497 | 32.0 | 800 | 1.5177 | 28.0251 | 14.0888 | 23.6316 | 24.6779 | 19.0 |
| 1.7497 | 33.0 | 825 | 1.5165 | 28.0519 | 14.0867 | 23.6242 | 24.6728 | 19.0 |
| 1.7497 | 34.0 | 850 | 1.5164 | 28.1185 | 14.1615 | 23.6657 | 24.7177 | 19.0 |
| 1.7497 | 35.0 | 875 | 1.5146 | 28.0809 | 14.1228 | 23.6657 | 24.7177 | 19.0 |
| 1.7497 | 36.0 | 900 | 1.5134 | 28.1107 | 14.1889 | 23.6946 | 24.7532 | 19.0 |
| 1.7497 | 37.0 | 925 | 1.5130 | 28.0476 | 14.0937 | 23.6232 | 24.6671 | 19.0 |
| 1.7497 | 38.0 | 950 | 1.5123 | 27.9979 | 14.0209 | 23.5935 | 24.6298 | 19.0 |
| 1.7497 | 39.0 | 975 | 1.5114 | 28.001 | 14.1042 | 23.6265 | 24.6735 | 19.0 |
| 1.5033 | 40.0 | 1000 | 1.5100 | 28.004 | 14.1355 | 23.6552 | 24.6776 | 19.0 |
| 1.5033 | 41.0 | 1025 | 1.5100 | 28.0346 | 14.1432 | 23.6432 | 24.7052 | 19.0 |
| 1.5033 | 42.0 | 1050 | 1.5098 | 28.052 | 14.1387 | 23.6401 | 24.6953 | 19.0 |
| 1.5033 | 43.0 | 1075 | 1.5098 | 28.1032 | 14.1743 | 23.6401 | 24.6953 | 19.0 |
| 1.5033 | 44.0 | 1100 | 1.5096 | 28.129 | 14.1847 | 23.7406 | 24.805 | 19.0 |
| 1.5033 | 45.0 | 1125 | 1.5093 | 28.1763 | 14.2264 | 23.7075 | 24.783 | 19.0 |
| 1.5033 | 46.0 | 1150 | 1.5090 | 28.1336 | 14.1871 | 23.7075 | 24.783 | 19.0 |
| 1.5033 | 47.0 | 1175 | 1.5089 | 28.1336 | 14.1871 | 23.7075 | 24.783 | 19.0 |
| 1.5033 | 48.0 | 1200 | 1.5088 | 28.1336 | 14.1871 | 23.7075 | 24.783 | 19.0 |
| 1.5033 | 49.0 | 1225 | 1.5087 | 28.129 | 14.1847 | 23.7406 | 24.805 | 19.0 |
| 1.5033 | 50.0 | 1250 | 1.5087 | 28.1323 | 14.1505 | 23.7163 | 24.743 | 19.0 |
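The Rouge1 scores in the table are unigram-overlap F-measures. A simplified, self-contained sketch of how such a score is computed (illustrative only; the evaluation above was presumably produced with the standard ROUGE tooling, which also applies stemming and other normalization):

```python
# Simplified ROUGE-1 F1: F-measure over clipped unigram overlap between
# a reference summary and a generated one. Illustrative sketch only.
from collections import Counter

def rouge1_f1(reference, hypothesis):
    """F1 over clipped unigram overlap (no stemming or stopword handling)."""
    ref_counts = Counter(reference.lower().split())
    hyp_counts = Counter(hypothesis.lower().split())
    # Clipped overlap: each hypothesis unigram counts at most as often
    # as it appears in the reference.
    overlap = sum(min(count, ref_counts[word]) for word, count in hyp_counts.items())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(hyp_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("the cat sat on the mat", "the cat sat"))  # ≈ 0.667
```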

Framework versions

  • Transformers 4.38.1
  • Pytorch 2.1.2
  • Datasets 2.1.0
  • Tokenizers 0.15.2
Model size

  • 60.5M parameters
  • Tensor type: F32 (Safetensors)

Model tree for sujayC66/text_summarization-finetuned-stocknews
