---
base_model: allenai/PRIMERA-multinews
library_name: peft
metrics:
  - rouge
tags:
  - generated_from_trainer
model-index:
  - name: PRIMERA-multinews-lora-finetuned
    results: []
---

# PRIMERA-multinews-lora-finetuned

This model is a fine-tuned version of [allenai/PRIMERA-multinews](https://huggingface.co/allenai/PRIMERA-multinews) on an unknown dataset.
It achieves the following results on the evaluation set:

- Loss: 1.6767
- Rouge1: 13.1661
- Rouge2: 6.075
- Rougel: 11.1948
- Rougelsum: 12.1382
- Gen Len: 20.0
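To use the adapter, load the base model and apply the PEFT weights on top. The sketch below is minimal and hedged: the adapter repo id is inferred from the model name and may differ, and the generation settings are illustrative, with `max_length=20` matching the Gen Len reported above.

```python
# Minimal inference sketch. ADAPTER_ID is an assumption inferred from the
# model name; substitute the actual Hub path if it differs.
import torch
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

BASE_ID = "allenai/PRIMERA-multinews"
ADAPTER_ID = "toanduc/PRIMERA-multinews-lora-finetuned"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
model = PeftModel.from_pretrained(
    AutoModelForSeq2SeqLM.from_pretrained(BASE_ID), ADAPTER_ID
)
model.eval()

# PRIMERA concatenates the source articles of a cluster with its <doc-sep> token.
cluster = "First article text. <doc-sep> Second article text."
inputs = tokenizer(cluster, return_tensors="pt", truncation=True, max_length=4096)
with torch.no_grad():
    ids = model.generate(**inputs, num_beams=4, max_length=20)  # Gen Len above is 20
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```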

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

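This is a PEFT (LoRA) adapter rather than a full fine-tune, but the adapter configuration itself is not recorded in this card. The sketch below shows a typical way such an adapter could be attached to PRIMERA; the rank, alpha, dropout, and target modules are assumptions, not the values actually used in training.

```python
# Hypothetical LoRA configuration; r, lora_alpha, lora_dropout, and
# target_modules are illustrative assumptions -- the actual adapter
# settings are not listed in this card.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSeq2SeqLM

base_model = AutoModelForSeq2SeqLM.from_pretrained("allenai/PRIMERA-multinews")
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=16,              # assumed rank
    lora_alpha=32,     # assumed scaling
    lora_dropout=0.05, # assumed dropout
    # Matches the decoder/cross-attention projections of the LED backbone;
    # a common choice, not confirmed by this card.
    target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```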
### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
- mixed_precision_training: Native AMP
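For reproducibility, here is a hedged sketch of how these values map onto `Seq2SeqTrainingArguments`. The `output_dir` and `predict_with_generate` settings are assumptions not recorded above; the Adam betas and epsilon listed match the `Trainer` defaults, so they need not be set explicitly.

```python
# Sketch of the recorded hyperparameters as Seq2SeqTrainingArguments.
# output_dir is illustrative; adam_beta1/beta2/epsilon keep their defaults,
# which equal the values listed above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="PRIMERA-multinews-lora-finetuned",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=16,
    fp16=True,  # "Native AMP" mixed precision
    predict_with_generate=True,  # needed for the ROUGE metrics reported here
)
```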

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Rouge1  | Rouge2 | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.2385        | 1.0   | 4725  | 1.9183          | 15.8308 | 7.6039 | 12.6624 | 14.1866   | 20.0    |
| 2.1419        | 2.0   | 9450  | 1.8933          | 14.8545 | 6.8893 | 12.144  | 13.4623   | 20.0    |
| 2.1286        | 3.0   | 14175 | 1.8619          | 16.2585 | 8.1431 | 13.4226 | 14.8653   | 20.0    |
| 2.0669        | 4.0   | 18900 | 1.8129          | 15.9624 | 7.5293 | 13.3809 | 14.7765   | 20.0    |
| 2.0448        | 5.0   | 23625 | 1.7636          | 16.2801 | 8.045  | 13.6178 | 14.996    | 20.0    |
| 1.9831        | 6.0   | 28350 | 1.7037          | 13.8735 | 5.9956 | 10.8251 | 12.2545   | 20.0    |
| 1.9926        | 7.0   | 33075 | 1.7623          | 13.9591 | 5.8861 | 11.112  | 12.4349   | 20.0    |
| 1.99          | 8.0   | 37800 | 1.7247          | 13.1441 | 5.2565 | 10.7117 | 11.851    | 20.0    |
| 1.9495        | 9.0   | 42525 | 1.7065          | 12.4863 | 4.6444 | 10.0155 | 11.3874   | 20.0    |
| 1.9782        | 10.0  | 47250 | 1.6919          | 11.8394 | 4.0068 | 9.4554  | 10.6421   | 20.0    |
| 1.9087        | 11.0  | 51975 | 1.6910          | 13.011  | 5.5644 | 10.7255 | 11.8532   | 20.0    |
| 1.9693        | 12.0  | 56700 | 1.6872          | 13.2678 | 5.7966 | 11.0537 | 12.1103   | 20.0    |
| 1.9445        | 13.0  | 61425 | 1.7084          | 13.2757 | 5.9337 | 11.084  | 12.2354   | 20.0    |
| 1.9467        | 14.0  | 66150 | 1.6729          | 12.9202 | 5.424  | 10.5315 | 11.6661   | 20.0    |
| 1.9582        | 15.0  | 70875 | 1.6786          | 13.2851 | 6.0806 | 11.291  | 12.2518   | 20.0    |
| 1.9186        | 16.0  | 75600 | 1.6767          | 13.1661 | 6.075  | 11.1948 | 12.1382   | 20.0    |
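Note that validation loss improves almost monotonically while the ROUGE scores peak around epochs 3-5, so the final checkpoint is not necessarily the strongest summarizer by ROUGE. The card does not show how the metrics were computed; a minimal sketch using the 🤗 `evaluate` ROUGE metric (scores scaled by 100 to match the table, as `generated_from_trainer` scripts typically do) follows; the `use_stemmer` choice is an assumption.

```python
# Hedged sketch of the ROUGE computation; the original compute_metrics
# function is not included in this card.
# Requires: pip install evaluate rouge-score
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["summary text produced by the model"],
    references=["reference summary from the dataset"],
    use_stemmer=True,  # common in summarization scripts; an assumption here
)
# evaluate returns fractions in [0, 1]; the table above reports them x 100
print({k: round(v * 100, 4) for k, v in scores.items()})
```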

### Framework versions

- PEFT 0.12.0
- Transformers 4.43.2
- Pytorch 2.2.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1