
denoice-finetuned-xsum

This model is a fine-tuned version of google/t5-efficient-tiny on an unknown dataset. It achieves the following results on the evaluation set (a brief usage sketch follows the metrics):

  • Loss: 0.0178
  • Rouge1: 95.5056
  • Rouge2: 72.8464
  • Rougel: 95.3933
  • Rougelsum: 95.5056
  • Gen Len: 5.1517
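
The card does not yet include usage instructions, so here is a minimal, hedged sketch of loading the checkpoint with the Transformers summarization pipeline. The repository id is taken from the model page, the input text is a placeholder, and the small `max_length` simply mirrors the ~5-token average generation length reported above.

```python
# Minimal usage sketch (assumptions: the checkpoint is used as a seq2seq
# summarization-style model; repository id taken from the model page).
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="CodeIsAbstract/denoice-finetuned-xsum",  # fine-tuned google/t5-efficient-tiny
)

text = "Your noisy input text goes here."  # placeholder input
# Generated outputs average about 5 tokens (see Gen Len above), so keep max_length small.
print(summarizer(text, max_length=16)[0]["summary_text"])
```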

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an equivalent Seq2SeqTrainingArguments sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 500
  • eval_batch_size: 500
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 70
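
As an illustration only (the actual training script is not part of the card), the hyperparameters above correspond roughly to the following Seq2SeqTrainingArguments. The output directory and evaluation strategy are assumptions, and the Adam betas/epsilon listed above are the Transformers defaults.

```python
# Hedged sketch of Seq2SeqTrainingArguments mirroring the listed hyperparameters.
# output_dir and evaluation_strategy are assumptions; adam_beta1/2 and adam_epsilon
# are left at their defaults (0.9, 0.999, 1e-08), which match the values in the card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="denoice-finetuned-xsum",
    learning_rate=2e-5,
    per_device_train_batch_size=500,
    per_device_eval_batch_size=500,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=70,
    evaluation_strategy="epoch",   # assumed: the results table reports metrics once per epoch
    predict_with_generate=True,    # needed so ROUGE / Gen Len can be computed from generations
)
```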

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 36 | 0.0219 | 94.8315 | 72.6592 | 94.8315 | 94.8876 | 5.1348 |
| No log | 2.0 | 72 | 0.0218 | 94.8315 | 72.6592 | 94.8315 | 94.8876 | 5.1348 |
| No log | 3.0 | 108 | 0.0215 | 94.8315 | 72.6592 | 94.8315 | 94.8876 | 5.1348 |
| No log | 4.0 | 144 | 0.0215 | 95.0562 | 72.6592 | 95.0562 | 95.0562 | 5.1573 |
| No log | 5.0 | 180 | 0.0214 | 95.0562 | 72.6592 | 95.0562 | 95.0562 | 5.1517 |
| No log | 6.0 | 216 | 0.0212 | 94.8315 | 72.6592 | 94.8315 | 94.8876 | 5.1348 |
| No log | 7.0 | 252 | 0.0209 | 94.6067 | 72.6592 | 94.6067 | 94.6067 | 5.1292 |
| No log | 8.0 | 288 | 0.0210 | 94.8876 | 72.0974 | 94.7753 | 94.8315 | 5.1236 |
| No log | 9.0 | 324 | 0.0208 | 94.8876 | 72.0974 | 94.7753 | 94.8315 | 5.1236 |
| No log | 10.0 | 360 | 0.0210 | 95.1124 | 72.0974 | 95.0 | 95.1124 | 5.1404 |
| No log | 11.0 | 396 | 0.0207 | 95.6742 | 72.6592 | 95.618 | 95.6742 | 5.1573 |
| No log | 12.0 | 432 | 0.0207 | 95.1124 | 72.0974 | 95.0 | 95.1124 | 5.1461 |
| No log | 13.0 | 468 | 0.0206 | 95.1124 | 72.0974 | 95.0 | 95.1124 | 5.1404 |
| 0.0349 | 14.0 | 504 | 0.0203 | 95.3933 | 72.6592 | 95.2809 | 95.3933 | 5.1461 |
| 0.0349 | 15.0 | 540 | 0.0202 | 95.3933 | 72.6592 | 95.2809 | 95.3933 | 5.1461 |
| 0.0349 | 16.0 | 576 | 0.0201 | 95.1124 | 72.0974 | 95.0 | 95.1124 | 5.1404 |
| 0.0349 | 17.0 | 612 | 0.0201 | 95.1124 | 72.0974 | 95.0 | 95.1124 | 5.1404 |
| 0.0349 | 18.0 | 648 | 0.0196 | 95.1124 | 72.0974 | 95.0 | 95.1124 | 5.1404 |
| 0.0349 | 19.0 | 684 | 0.0194 | 95.2247 | 72.2846 | 95.1124 | 95.2247 | 5.1629 |
| 0.0349 | 20.0 | 720 | 0.0192 | 95.1124 | 72.0974 | 95.0 | 95.1124 | 5.1404 |
| 0.0349 | 21.0 | 756 | 0.0192 | 95.3933 | 72.6592 | 95.2809 | 95.3933 | 5.1461 |
| 0.0349 | 22.0 | 792 | 0.0193 | 95.3933 | 72.6592 | 95.2809 | 95.3933 | 5.1461 |
| 0.0349 | 23.0 | 828 | 0.0193 | 95.5056 | 72.8464 | 95.3933 | 95.5056 | 5.1517 |
| 0.0349 | 24.0 | 864 | 0.0194 | 95.3933 | 72.6592 | 95.2809 | 95.3933 | 5.1461 |
| 0.0349 | 25.0 | 900 | 0.0193 | 95.3933 | 72.6592 | 95.2809 | 95.3933 | 5.1461 |
| 0.0349 | 26.0 | 936 | 0.0194 | 95.3933 | 72.6592 | 95.2809 | 95.3933 | 5.1461 |
| 0.0349 | 27.0 | 972 | 0.0193 | 95.3933 | 72.6592 | 95.2809 | 95.3933 | 5.1461 |
| 0.0315 | 28.0 | 1008 | 0.0192 | 95.3933 | 72.6592 | 95.2809 | 95.3933 | 5.1461 |
| 0.0315 | 29.0 | 1044 | 0.0190 | 95.3933 | 72.6592 | 95.2809 | 95.3933 | 5.1461 |
| 0.0315 | 30.0 | 1080 | 0.0191 | 95.3933 | 72.6592 | 95.2809 | 95.3933 | 5.1461 |
| 0.0315 | 31.0 | 1116 | 0.0190 | 95.3933 | 72.6592 | 95.2809 | 95.3933 | 5.1461 |
| 0.0315 | 32.0 | 1152 | 0.0191 | 95.3933 | 72.6592 | 95.2809 | 95.3933 | 5.1461 |
| 0.0315 | 33.0 | 1188 | 0.0190 | 95.1124 | 72.0974 | 95.0 | 95.1124 | 5.1404 |
| 0.0315 | 34.0 | 1224 | 0.0190 | 95.1124 | 72.0974 | 95.0 | 95.1124 | 5.1404 |
| 0.0315 | 35.0 | 1260 | 0.0188 | 95.1124 | 72.0974 | 95.0 | 95.1124 | 5.1404 |
| 0.0315 | 36.0 | 1296 | 0.0187 | 95.1124 | 72.0974 | 95.0 | 95.1124 | 5.1404 |
| 0.0315 | 37.0 | 1332 | 0.0186 | 95.1124 | 72.0974 | 95.0 | 95.1124 | 5.1404 |
| 0.0315 | 38.0 | 1368 | 0.0186 | 95.2247 | 72.2846 | 95.1124 | 95.2247 | 5.1461 |
| 0.0315 | 39.0 | 1404 | 0.0186 | 95.5056 | 72.8464 | 95.3933 | 95.5056 | 5.1517 |
| 0.0315 | 40.0 | 1440 | 0.0186 | 95.3933 | 72.6592 | 95.2809 | 95.3933 | 5.1461 |
| 0.0315 | 41.0 | 1476 | 0.0185 | 95.5056 | 72.8464 | 95.3933 | 95.5056 | 5.1517 |
| 0.0291 | 42.0 | 1512 | 0.0184 | 95.5056 | 72.8464 | 95.3933 | 95.5056 | 5.1517 |
| 0.0291 | 43.0 | 1548 | 0.0184 | 95.5056 | 72.8464 | 95.3933 | 95.5056 | 5.1517 |
| 0.0291 | 44.0 | 1584 | 0.0184 | 95.5056 | 72.8464 | 95.3933 | 95.5056 | 5.1517 |
| 0.0291 | 45.0 | 1620 | 0.0183 | 95.618 | 73.0337 | 95.5056 | 95.618 | 5.1742 |
| 0.0291 | 46.0 | 1656 | 0.0182 | 95.5056 | 72.8464 | 95.3933 | 95.5056 | 5.1517 |
| 0.0291 | 47.0 | 1692 | 0.0182 | 95.618 | 73.0337 | 95.5056 | 95.618 | 5.1742 |
| 0.0291 | 48.0 | 1728 | 0.0182 | 95.618 | 73.0337 | 95.5056 | 95.618 | 5.1742 |
| 0.0291 | 49.0 | 1764 | 0.0182 | 95.618 | 73.0337 | 95.5056 | 95.618 | 5.1742 |
| 0.0291 | 50.0 | 1800 | 0.0182 | 95.618 | 73.0337 | 95.5056 | 95.618 | 5.1742 |
| 0.0291 | 51.0 | 1836 | 0.0182 | 95.618 | 73.0337 | 95.5056 | 95.618 | 5.1742 |
| 0.0291 | 52.0 | 1872 | 0.0181 | 95.618 | 73.0337 | 95.5056 | 95.618 | 5.1742 |
| 0.0291 | 53.0 | 1908 | 0.0179 | 95.2247 | 72.2846 | 95.1124 | 95.2247 | 5.1461 |
| 0.0291 | 54.0 | 1944 | 0.0179 | 95.5056 | 72.8464 | 95.3933 | 95.5056 | 5.1517 |
| 0.0291 | 55.0 | 1980 | 0.0179 | 95.618 | 73.0337 | 95.5056 | 95.618 | 5.1742 |
| 0.0279 | 56.0 | 2016 | 0.0178 | 95.5056 | 72.8464 | 95.3933 | 95.5056 | 5.1517 |
| 0.0279 | 57.0 | 2052 | 0.0178 | 95.5056 | 72.8464 | 95.3933 | 95.5056 | 5.1517 |
| 0.0279 | 58.0 | 2088 | 0.0177 | 95.5056 | 72.8464 | 95.3933 | 95.5056 | 5.1517 |
| 0.0279 | 59.0 | 2124 | 0.0178 | 95.5056 | 72.8464 | 95.3933 | 95.5056 | 5.1517 |
| 0.0279 | 60.0 | 2160 | 0.0178 | 95.5056 | 72.8464 | 95.3933 | 95.5056 | 5.1517 |
| 0.0279 | 61.0 | 2196 | 0.0178 | 95.5056 | 72.8464 | 95.3933 | 95.5056 | 5.1517 |
| 0.0279 | 62.0 | 2232 | 0.0178 | 95.5056 | 72.8464 | 95.3933 | 95.5056 | 5.1517 |
| 0.0279 | 63.0 | 2268 | 0.0178 | 95.5056 | 72.8464 | 95.3933 | 95.5056 | 5.1517 |
| 0.0279 | 64.0 | 2304 | 0.0178 | 95.5056 | 72.8464 | 95.3933 | 95.5056 | 5.1517 |
| 0.0279 | 65.0 | 2340 | 0.0178 | 95.5056 | 72.8464 | 95.3933 | 95.5056 | 5.1517 |
| 0.0279 | 66.0 | 2376 | 0.0178 | 95.5056 | 72.8464 | 95.3933 | 95.5056 | 5.1517 |
| 0.0279 | 67.0 | 2412 | 0.0178 | 95.5056 | 72.8464 | 95.3933 | 95.5056 | 5.1517 |
| 0.0279 | 68.0 | 2448 | 0.0178 | 95.5056 | 72.8464 | 95.3933 | 95.5056 | 5.1517 |
| 0.0279 | 69.0 | 2484 | 0.0178 | 95.5056 | 72.8464 | 95.3933 | 95.5056 | 5.1517 |
| 0.0274 | 70.0 | 2520 | 0.0178 | 95.5056 | 72.8464 | 95.3933 | 95.5056 | 5.1517 |
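
The card does not say how these metrics were computed, but ROUGE and Gen Len columns like the ones above are typically produced by a `compute_metrics` callback passed to the trainer. Below is a hedged sketch using the `evaluate` library; the tokenizer choice and post-processing are assumptions, not taken from the card.

```python
# Hedged sketch of a compute_metrics callback that yields ROUGE (scaled to 0-100)
# and an average generation length, as in the table above. Tokenizer choice and
# post-processing are assumptions.
import numpy as np
import evaluate
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-tiny")
rouge = evaluate.load("rouge")

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)
    # Replace the -100 padding used for loss masking before decoding the references.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)

    result = rouge.compute(predictions=decoded_preds, references=decoded_labels,
                           use_stemmer=True)
    result = {key: round(value * 100, 4) for key, value in result.items()}

    # "Gen Len": mean number of non-padding tokens in the generated sequences.
    prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in predictions]
    result["gen_len"] = round(float(np.mean(prediction_lens)), 4)
    return result
```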

Framework versions

  • Transformers 4.36.2
  • Pytorch 1.13.1
  • Datasets 2.16.1
  • Tokenizers 0.15.0