
liputan6-base

This model is a fine-tuned version of LazarusNLP/IndoNanoT5-base on the canonical configuration of the id_liputan6 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5539
  • ROUGE-1: 40.3998
  • ROUGE-2: 30.0512
  • ROUGE-L: 37.1464
  • ROUGE-Lsum: 39.0852
  • Gen Len (average generated summary length): 56.486

Model description

liputan6-base is a T5-style encoder-decoder model (roughly 248M parameters, stored as F32 safetensors) for abstractive summarization of Indonesian news, obtained by fine-tuning LazarusNLP/IndoNanoT5-base.

Intended uses & limitations

The model is intended for generating abstractive summaries of Indonesian-language news articles similar to those in the Liputan6 corpus. It has not been evaluated on other domains or languages, so output quality outside Indonesian news text is unknown.
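
Since the card does not include a usage snippet, here is a minimal sketch of running the checkpoint for summarization, assuming it follows the standard Hugging Face seq2seq API; the article text and generation settings (beam size, length limits) are illustrative, not taken from the card:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "apwic/liputan6-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder Indonesian news article; any Liputan6-style text works here.
article = "Liputan6.com, Jakarta: Pemerintah mengumumkan kebijakan baru ..."

inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```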

Training and evaluation data

The model was fine-tuned and evaluated on the canonical configuration of the id_liputan6 dataset, a large-scale Indonesian summarization corpus of news articles from the Liputan6.com portal paired with reference summaries.

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.001
  • train_batch_size: 16
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5.0
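
As a concrete reference, the sketch below maps the hyperparameters above onto Seq2SeqTrainingArguments; the output directory, evaluation strategy, and predict_with_generate flag are assumptions, since the card does not state how the run was launched:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="liputan6-base",   # hypothetical output path
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5.0,
    evaluation_strategy="epoch",  # assumption: matches the per-epoch results table
    predict_with_generate=True,   # assumption: needed to compute ROUGE during eval
)
```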

Training results

| Training Loss | Epoch | Step | Validation Loss | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum | Gen Len |
|--------------:|------:|-----:|----------------:|--------:|--------:|--------:|-----------:|--------:|
| 1.6488        | 1.0   | 63   | 0.7318          | 34.224  | 24.5266 | 31.0318 | 32.8875    | 65.191  |
| 0.6983        | 2.0   | 126  | 0.6433          | 37.3155 | 27.3019 | 33.9529 | 36.1013    | 65.46   |
| 0.4226        | 3.0   | 189  | 0.5831          | 36.9679 | 26.3535 | 33.5956 | 35.7604    | 59.969  |
| 0.242         | 4.0   | 252  | 0.5539          | 39.0802 | 28.4622 | 35.8085 | 37.8181    | 55.301  |
| 0.1248        | 5.0   | 315  | 0.5170          | 38.108  | 27.5573 | 34.7198 | 36.6919    | 56.589  |
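
The card does not say how these ROUGE scores were computed; a common approach, shown here as an assumption rather than the author's method, is the `evaluate` library:

```python
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["ringkasan yang dihasilkan model ..."],  # model outputs (placeholder)
    references=["ringkasan referensi dari dataset ..."],  # gold summaries (placeholder)
)
# `evaluate` returns fractions in [0, 1]; the table above reports percentages.
print({k: round(v * 100, 4) for k, v in scores.items()})
```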

Framework versions

  • Transformers 4.40.2
  • PyTorch 2.3.1+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1