---
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: arabert_cross_vocabulary_task4_fold3
  results: []
---

# arabert_cross_vocabulary_task4_fold3

This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.9487
- Qwk: 0.7917
- Mse: 0.9487
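
As a quick smoke test, the model can be loaded with the standard `transformers` API. This is a minimal sketch: the repo id is assumed from this card's title and uploader namespace, and the single-score regression head is an inference from the MSE/QWK metrics, since the card does not document the task format.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed hub repo id, inferred from this card's title; adjust the
# namespace if the model lives elsewhere.
repo_id = "salbatarni/arabert_cross_vocabulary_task4_fold3"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

# Hypothetical Arabic input; the actual evaluation data is not documented.
text = "هذا مثال لجملة عربية."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Given the MSE/QWK metrics above, the head is presumably a single-logit
# regressor, so the raw logit is the predicted score.
print(logits)
```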

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a minimal `TrainingArguments` sketch follows the list):

- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
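
These settings map directly onto the `transformers` Trainer API. A minimal sketch, assuming an illustrative output directory; everything else is taken from the list above, and the Adam betas and epsilon shown are also the `TrainingArguments` defaults.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="arabert_cross_vocabulary_task4_fold3",  # illustrative
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08, as listed above.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
)
```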

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Qwk    | Mse    |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| No log        | 0.0308 | 2    | 2.5656          | 0.1026 | 2.5656 |
| No log        | 0.0615 | 4    | 1.5800          | 0.2273 | 1.5800 |
| No log        | 0.0923 | 6    | 1.4668          | 0.3238 | 1.4668 |
| No log        | 0.1231 | 8    | 1.7439          | 0.5345 | 1.7439 |
| No log        | 0.1538 | 10   | 1.8731          | 0.5182 | 1.8731 |
| No log        | 0.1846 | 12   | 1.6136          | 0.5153 | 1.6136 |
| No log        | 0.2154 | 14   | 1.0770          | 0.6447 | 1.0770 |
| No log        | 0.2462 | 16   | 0.8334          | 0.6452 | 0.8334 |
| No log        | 0.2769 | 18   | 0.8987          | 0.6213 | 0.8987 |
| No log        | 0.3077 | 20   | 1.0528          | 0.6192 | 1.0528 |
| No log        | 0.3385 | 22   | 1.1536          | 0.6127 | 1.1536 |
| No log        | 0.3692 | 24   | 1.1237          | 0.6677 | 1.1237 |
| No log        | 0.4    | 26   | 1.0225          | 0.7612 | 1.0225 |
| No log        | 0.4308 | 28   | 0.9282          | 0.7931 | 0.9282 |
| No log        | 0.4615 | 30   | 0.8550          | 0.7997 | 0.8550 |
| No log        | 0.4923 | 32   | 0.8655          | 0.8051 | 0.8655 |
| No log        | 0.5231 | 34   | 0.8784          | 0.7955 | 0.8784 |
| No log        | 0.5538 | 36   | 0.9842          | 0.7843 | 0.9842 |
| No log        | 0.5846 | 38   | 0.9740          | 0.7860 | 0.9740 |
| No log        | 0.6154 | 40   | 0.9578          | 0.7855 | 0.9578 |
| No log        | 0.6462 | 42   | 0.8675          | 0.7868 | 0.8675 |
| No log        | 0.6769 | 44   | 0.8691          | 0.7893 | 0.8691 |
| No log        | 0.7077 | 46   | 0.9121          | 0.7886 | 0.9121 |
| No log        | 0.7385 | 48   | 0.9594          | 0.7961 | 0.9594 |
| No log        | 0.7692 | 50   | 0.9137          | 0.7879 | 0.9137 |
| No log        | 0.8    | 52   | 0.8811          | 0.7869 | 0.8811 |
| No log        | 0.8308 | 54   | 0.8791          | 0.7869 | 0.8791 |
| No log        | 0.8615 | 56   | 0.9161          | 0.7964 | 0.9161 |
| No log        | 0.8923 | 58   | 0.9349          | 0.7936 | 0.9349 |
| No log        | 0.9231 | 60   | 0.9520          | 0.7971 | 0.9520 |
| No log        | 0.9538 | 62   | 0.9575          | 0.7917 | 0.9575 |
| No log        | 0.9846 | 64   | 0.9487          | 0.7917 | 0.9487 |
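
For reference, Qwk is quadratic weighted kappa and Mse is mean squared error. A minimal sketch of computing both with scikit-learn, using hypothetical scores and assuming predictions are rounded to integer labels for kappa (the card does not document the scoring scale):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

# Hypothetical gold scores and model predictions.
y_true = np.array([3, 1, 4, 2, 3])
y_pred = np.array([2.8, 1.2, 3.9, 2.4, 3.1])

mse = mean_squared_error(y_true, y_pred)
# Kappa compares discrete labels, so regression outputs are rounded
# first (an assumption; the actual scale is not documented here).
qwk = cohen_kappa_score(y_true, np.rint(y_pred).astype(int), weights="quadratic")

print(f"Mse: {mse:.4f}  Qwk: {qwk:.4f}")
```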

### Framework versions

- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1