---
base_model: aubmindlab/bert-base-arabertv02
tags:
  - generated_from_trainer
model-index:
  - name: arabert_baseline_vocabulary_task5_fold0
    results: []
---

arabert_baseline_vocabulary_task5_fold0

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.8715
  • Qwk: 0.6491
  • Mse: 0.8715
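
The checkpoint can be loaded with the standard transformers API. A minimal sketch, assuming the repository id salbatarni/arabert_baseline_vocabulary_task5_fold0 and a single-output regression-style head (both are assumptions, not stated on this card):

```python
# Hedged loading sketch; the repo id and the regression-style head are assumptions.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "salbatarni/arabert_baseline_vocabulary_task5_fold0"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("جملة عربية للتجربة", return_tensors="pt")  # example Arabic input
score = model(**inputs).logits.item()  # single scalar score under the regression assumption
print(score)
```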

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
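
For reference, a minimal sketch of the TrainingArguments that mirror the hyperparameters above; the output_dir is hypothetical, and the Trainer's default AdamW optimizer is assumed to correspond to the Adam settings listed:

```python
# Sketch of the training configuration implied by the list above; not the author's exact script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="arabert_baseline_vocabulary_task5_fold0",  # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```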

Training results

| Training Loss | Epoch  | Step | Validation Loss | Qwk    | Mse    |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| No log        | 0.3333 | 2    | 2.2168          | 0.1714 | 2.2168 |
| No log        | 0.6667 | 4    | 2.0041          | 0.0    | 2.0041 |
| No log        | 1.0    | 6    | 1.8058          | 0.0    | 1.8058 |
| No log        | 1.3333 | 8    | 1.6307          | 0.0    | 1.6307 |
| No log        | 1.6667 | 10   | 1.5228          | 0.1055 | 1.5228 |
| No log        | 2.0    | 12   | 1.4956          | 0.1325 | 1.4956 |
| No log        | 2.3333 | 14   | 1.4414          | 0.1842 | 1.4414 |
| No log        | 2.6667 | 16   | 1.3583          | 0.3623 | 1.3583 |
| No log        | 3.0    | 18   | 1.2822          | 0.4602 | 1.2822 |
| No log        | 3.3333 | 20   | 1.2058          | 0.4783 | 1.2058 |
| No log        | 3.6667 | 22   | 1.1388          | 0.5430 | 1.1388 |
| No log        | 4.0    | 24   | 1.0848          | 0.5778 | 1.0848 |
| No log        | 4.3333 | 26   | 1.0520          | 0.5161 | 1.0520 |
| No log        | 4.6667 | 28   | 1.0213          | 0.5161 | 1.0213 |
| No log        | 5.0    | 30   | 0.9946          | 0.4866 | 0.9946 |
| No log        | 5.3333 | 32   | 0.9759          | 0.5368 | 0.9759 |
| No log        | 5.6667 | 34   | 0.9511          | 0.5368 | 0.9511 |
| No log        | 6.0    | 36   | 0.9260          | 0.5662 | 0.9260 |
| No log        | 6.3333 | 38   | 0.9142          | 0.5662 | 0.9142 |
| No log        | 6.6667 | 40   | 0.8995          | 0.5662 | 0.8995 |
| No log        | 7.0    | 42   | 0.9024          | 0.5940 | 0.9024 |
| No log        | 7.3333 | 44   | 0.9102          | 0.5368 | 0.9102 |
| No log        | 7.6667 | 46   | 0.9036          | 0.6258 | 0.9036 |
| No log        | 8.0    | 48   | 0.8943          | 0.6304 | 0.8943 |
| No log        | 8.3333 | 50   | 0.8872          | 0.6042 | 0.8872 |
| No log        | 8.6667 | 52   | 0.8828          | 0.6042 | 0.8828 |
| No log        | 9.0    | 54   | 0.8785          | 0.6491 | 0.8785 |
| No log        | 9.3333 | 56   | 0.8747          | 0.6491 | 0.8747 |
| No log        | 9.6667 | 58   | 0.8725          | 0.6491 | 0.8725 |
| No log        | 10.0   | 60   | 0.8715          | 0.6491 | 0.8715 |
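
Training loss appears as "No log" because no logging step occurred between evaluations in this short run. The Qwk and Mse columns are not defined on this card; below is a minimal sketch of a compute_metrics function that could produce them, assuming Qwk is quadratic weighted kappa computed on integer-rounded scores (an assumption, not the author's confirmed metric code):

```python
# Hedged sketch of metrics matching the Qwk/Mse columns; rounding to integer labels is an assumption.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.asarray(predictions).squeeze(-1)  # regression head -> 1D scores
    labels = np.asarray(labels)
    mse = mean_squared_error(labels, predictions)
    # QWK needs discrete categories, so scores are rounded to the nearest integer here.
    qwk = cohen_kappa_score(
        np.rint(labels).astype(int),
        np.rint(predictions).astype(int),
        weights="quadratic",
    )
    return {"qwk": qwk, "mse": mse}
```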

Framework versions

  • Transformers 4.44.0
  • Pytorch 2.4.0
  • Datasets 2.21.0
  • Tokenizers 0.19.1