
text_shortening_model_v68

This model is a fine-tuned version of t5-small on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 1.1011
  • BERT precision: 0.8904
  • BERT recall: 0.8919
  • BERT F1: 0.8906
  • Average word count: 6.7117
  • Max word count: 18
  • Min word count: 2
  • Average token count: 10.7497
  • % of shortened texts with length > 12 words: 2.002
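
The three BERT-based numbers are BERTScore metrics. Below is a minimal sketch of computing comparable values with the Hugging Face evaluate library; the placeholder inputs and the lang="en" setting are assumptions, since the card does not document the exact BERTScore configuration:

```python
import evaluate

# Sketch: score model outputs against reference shortenings with BERTScore.
# The inputs below are placeholders; lang="en" selects a default English model.
bertscore = evaluate.load("bertscore")

predictions = ["example short headline"]          # model outputs (placeholder)
references = ["an example of a much longer headline"]  # gold references (placeholder)

scores = bertscore.compute(predictions=predictions, references=references, lang="en")
# compute() returns per-example lists; average them for corpus-level numbers.
print(sum(scores["precision"]) / len(scores["precision"]),
      sum(scores["recall"]) / len(scores["recall"]),
      sum(scores["f1"]) / len(scores["f1"]))
```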

Model description

More information needed

Intended uses & limitations

More information needed
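
Since the model is a T5 seq2seq fine-tune for text shortening, it can presumably be loaded as a standard text-to-text pipeline. A minimal sketch, assuming the checkpoint ID from this card and illustrative decoding settings (the card documents neither a task prefix nor generation parameters):

```python
from transformers import pipeline

# Hypothetical usage sketch; decoding settings are assumptions.
shortener = pipeline("text2text-generation", model="ldos/text_shortening_model_v68")

output = shortener(
    "A long sentence that we would like the model to compress into a short phrase.",
    max_new_tokens=20,  # outputs average ~10.7 tokens per the card's metrics
)
print(output[0]["generated_text"])
```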

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 40
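
These settings map directly onto the transformers TrainingArguments API. A minimal sketch, assuming Seq2SeqTrainingArguments and a hypothetical output_dir; anything not listed above is left at its library default:

```python
from transformers import Seq2SeqTrainingArguments

# The card's hyperparameters expressed as TrainingArguments.
# output_dir is a hypothetical name; unlisted options keep library defaults.
training_args = Seq2SeqTrainingArguments(
    output_dir="text_shortening_model_v68",
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=40,
)
```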

Training results

| Training Loss | Epoch | Step | Validation Loss | BERT precision | BERT recall | BERT F1 | Avg word count | Max word count | Min word count | Avg token count | % texts > 12 words |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 2.3576 | 1.0 | 37 | 1.6358 | 0.847 | 0.8396 | 0.8424 | 6.3233 | 16 | 0 | 10.7958 | 6.3063 |
| 1.7631 | 2.0 | 74 | 1.4404 | 0.8739 | 0.8715 | 0.8721 | 6.5666 | 16 | 0 | 10.4695 | 2.3023 |
| 1.5884 | 3.0 | 111 | 1.3492 | 0.8807 | 0.8787 | 0.8791 | 6.6006 | 16 | 2 | 10.3904 | 2.002 |
| 1.5068 | 4.0 | 148 | 1.2953 | 0.8829 | 0.881 | 0.8813 | 6.6006 | 16 | 2 | 10.4194 | 1.7017 |
| 1.4361 | 5.0 | 185 | 1.2638 | 0.8847 | 0.8836 | 0.8836 | 6.6547 | 16 | 2 | 10.4965 | 1.3013 |
| 1.3844 | 6.0 | 222 | 1.2357 | 0.8851 | 0.8846 | 0.8843 | 6.6747 | 16 | 2 | 10.5105 | 1.9019 |
| 1.351 | 7.0 | 259 | 1.2146 | 0.8832 | 0.8858 | 0.8839 | 6.8649 | 16 | 2 | 10.7628 | 2.3023 |
| 1.2944 | 8.0 | 296 | 1.2008 | 0.8848 | 0.8867 | 0.8852 | 6.7728 | 15 | 2 | 10.7047 | 2.1021 |
| 1.2785 | 9.0 | 333 | 1.1889 | 0.8856 | 0.8872 | 0.8858 | 6.7538 | 16 | 2 | 10.6987 | 1.8018 |
| 1.2469 | 10.0 | 370 | 1.1774 | 0.8851 | 0.8868 | 0.8854 | 6.7247 | 15 | 2 | 10.6627 | 2.1021 |
| 1.2206 | 11.0 | 407 | 1.1674 | 0.886 | 0.8882 | 0.8865 | 6.7558 | 16 | 2 | 10.7477 | 1.9019 |
| 1.1955 | 12.0 | 444 | 1.1614 | 0.8851 | 0.8875 | 0.8858 | 6.7748 | 15 | 2 | 10.7848 | 1.9019 |
| 1.1707 | 13.0 | 481 | 1.1516 | 0.8854 | 0.8879 | 0.8861 | 6.7698 | 15 | 2 | 10.7908 | 2.002 |
| 1.1455 | 14.0 | 518 | 1.1470 | 0.8871 | 0.8882 | 0.8872 | 6.6817 | 17 | 1 | 10.6867 | 1.9019 |
| 1.1392 | 15.0 | 555 | 1.1384 | 0.8861 | 0.8889 | 0.887 | 6.7658 | 17 | 1 | 10.8008 | 1.8018 |
| 1.1212 | 16.0 | 592 | 1.1351 | 0.8876 | 0.8902 | 0.8883 | 6.7528 | 17 | 1 | 10.8078 | 2.002 |
| 1.0965 | 17.0 | 629 | 1.1316 | 0.8861 | 0.8893 | 0.8872 | 6.7918 | 17 | 1 | 10.8639 | 2.3023 |
| 1.1 | 18.0 | 666 | 1.1269 | 0.8869 | 0.8901 | 0.8879 | 6.8218 | 17 | 2 | 10.8809 | 2.2022 |
| 1.0679 | 19.0 | 703 | 1.1220 | 0.8867 | 0.8889 | 0.8873 | 6.7157 | 17 | 1 | 10.7658 | 1.5015 |
| 1.0708 | 20.0 | 740 | 1.1209 | 0.8865 | 0.8889 | 0.8872 | 6.7618 | 17 | 1 | 10.7898 | 1.8018 |
| 1.0444 | 21.0 | 777 | 1.1178 | 0.8872 | 0.8892 | 0.8877 | 6.7047 | 17 | 2 | 10.7598 | 1.8018 |
| 1.0347 | 22.0 | 814 | 1.1161 | 0.8882 | 0.8902 | 0.8887 | 6.7167 | 17 | 2 | 10.7568 | 1.6016 |
| 1.0212 | 23.0 | 851 | 1.1147 | 0.8883 | 0.89 | 0.8886 | 6.7017 | 17 | 2 | 10.7467 | 1.8018 |
| 1.0264 | 24.0 | 888 | 1.1113 | 0.8879 | 0.8899 | 0.8884 | 6.6987 | 17 | 2 | 10.7397 | 1.8018 |
| 1.0186 | 25.0 | 925 | 1.1099 | 0.8876 | 0.8893 | 0.8879 | 6.6997 | 17 | 2 | 10.7417 | 1.7017 |
| 1.0124 | 26.0 | 962 | 1.1102 | 0.8882 | 0.8903 | 0.8888 | 6.7277 | 17 | 2 | 10.7718 | 2.1021 |
| 1.0081 | 27.0 | 999 | 1.1082 | 0.8889 | 0.8901 | 0.889 | 6.6687 | 17 | 2 | 10.6877 | 1.7017 |
| 1.0107 | 28.0 | 1036 | 1.1044 | 0.8893 | 0.8906 | 0.8894 | 6.6567 | 17 | 2 | 10.6807 | 1.7017 |
| 0.9788 | 29.0 | 1073 | 1.1060 | 0.8891 | 0.8905 | 0.8893 | 6.6817 | 18 | 2 | 10.7137 | 2.002 |
| 0.9899 | 30.0 | 1110 | 1.1052 | 0.8894 | 0.8915 | 0.8899 | 6.7357 | 18 | 2 | 10.7598 | 2.2022 |
| 0.9736 | 31.0 | 1147 | 1.1050 | 0.8896 | 0.8915 | 0.8901 | 6.7027 | 18 | 2 | 10.7367 | 2.002 |
| 0.9779 | 32.0 | 1184 | 1.1051 | 0.8899 | 0.892 | 0.8905 | 6.7237 | 18 | 2 | 10.7618 | 2.1021 |
| 0.9704 | 33.0 | 1221 | 1.1033 | 0.89 | 0.8914 | 0.8902 | 6.6877 | 18 | 2 | 10.7117 | 1.8018 |
| 0.9711 | 34.0 | 1258 | 1.1021 | 0.8894 | 0.8912 | 0.8898 | 6.7027 | 18 | 2 | 10.7327 | 1.8018 |
| 0.9637 | 35.0 | 1295 | 1.1019 | 0.89 | 0.8913 | 0.8901 | 6.6907 | 18 | 2 | 10.7217 | 1.9019 |
| 0.9525 | 36.0 | 1332 | 1.1016 | 0.8901 | 0.8915 | 0.8903 | 6.6997 | 18 | 2 | 10.7177 | 1.9019 |
| 0.9668 | 37.0 | 1369 | 1.1009 | 0.8902 | 0.8918 | 0.8905 | 6.7127 | 18 | 2 | 10.7497 | 2.002 |
| 0.9704 | 38.0 | 1406 | 1.1013 | 0.8904 | 0.8921 | 0.8908 | 6.7187 | 18 | 2 | 10.7528 | 2.1021 |
| 0.9531 | 39.0 | 1443 | 1.1010 | 0.8904 | 0.8919 | 0.8906 | 6.7117 | 18 | 2 | 10.7497 | 2.002 |
| 0.958 | 40.0 | 1480 | 1.1011 | 0.8904 | 0.8919 | 0.8906 | 6.7117 | 18 | 2 | 10.7497 | 2.002 |

Framework versions

  • Transformers 4.33.1
  • Pytorch 2.0.1+cu118
  • Datasets 2.14.5
  • Tokenizers 0.13.3

Model tree for ldos/text_shortening_model_v68

  • Base model: google-t5/t5-small