---
base_model: aubmindlab/bert-base-arabertv02
tags:
  - generated_from_trainer
model-index:
  - name: arabert_baseline_development_task5_fold1
    results: []
---

arabert_baseline_development_task5_fold1

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set (a loading sketch follows the list):

  • Loss: 0.5703
  • QWK (quadratic weighted kappa): 0.7312
  • MSE (mean squared error): 0.5703
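
A minimal sketch of loading this checkpoint for inference. The Hub repo id below is an assumption inferred from the model name, and the single-output regression head is a guess suggested by the MSE/QWK metrics rather than something the card confirms:

```python
# Minimal loading sketch -- repo id and regression head are assumptions,
# not confirmed by this card.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "salbatarni/arabert_baseline_development_task5_fold1"  # assumed Hub path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

inputs = tokenizer("نص تجريبي للتقييم", return_tensors="pt")  # placeholder Arabic text
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.squeeze().item())  # single score if the head is regression (assumed)
```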

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the TrainingArguments sketch after the list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
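
As a rough illustration, here is how these settings map onto transformers.TrainingArguments. This is a sketch, not the authors' actual training script; the output directory is a placeholder and anything not listed above is left at its default:

```python
# Sketch only: the card's hyperparameters expressed as TrainingArguments.
# output_dir is a placeholder; this is not the authors' training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="arabert_baseline_development_task5_fold1",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",  # linear decay, matching the card
    adam_beta1=0.9,   # Adam betas reported above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```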

Training results

| Training Loss | Epoch   | Step | Validation Loss | QWK     | MSE    |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|
| No log        | 0.3333  | 2    | 2.7370          | -0.0412 | 2.7370 |
| No log        | 0.6667  | 4    | 1.1106          | 0.0254  | 1.1106 |
| No log        | 1.0     | 6    | 0.8018          | 0.3448  | 0.8018 |
| No log        | 1.3333  | 8    | 0.7112          | 0.4203  | 0.7112 |
| No log        | 1.6667  | 10   | 0.7153          | 0.2778  | 0.7153 |
| No log        | 2.0     | 12   | 0.6181          | 0.3333  | 0.6181 |
| No log        | 2.3333  | 14   | 0.5263          | 0.4379  | 0.5263 |
| No log        | 2.6667  | 16   | 0.5056          | 0.4444  | 0.5056 |
| No log        | 3.0     | 18   | 0.4721          | 0.4318  | 0.4721 |
| No log        | 3.3333  | 20   | 0.4560          | 0.5251  | 0.4560 |
| No log        | 3.6667  | 22   | 0.4550          | 0.5506  | 0.4550 |
| No log        | 4.0     | 24   | 0.5190          | 0.7399  | 0.5190 |
| No log        | 4.3333  | 26   | 0.5822          | 0.7568  | 0.5822 |
| No log        | 4.6667  | 28   | 0.5363          | 0.6328  | 0.5363 |
| No log        | 5.0     | 30   | 0.5203          | 0.4382  | 0.5203 |
| No log        | 5.3333  | 32   | 0.5859          | 0.4388  | 0.5859 |
| No log        | 5.6667  | 34   | 0.5248          | 0.3956  | 0.5248 |
| No log        | 6.0     | 36   | 0.5042          | 0.5977  | 0.5042 |
| No log        | 6.3333  | 38   | 0.5446          | 0.6667  | 0.5446 |
| No log        | 6.6667  | 40   | 0.5984          | 0.7312  | 0.5984 |
| No log        | 7.0     | 42   | 0.6002          | 0.7312  | 0.6002 |
| No log        | 7.3333  | 44   | 0.5662          | 0.7312  | 0.5662 |
| No log        | 7.6667  | 46   | 0.5376          | 0.6667  | 0.5376 |
| No log        | 8.0     | 48   | 0.5417          | 0.6667  | 0.5417 |
| No log        | 8.3333  | 50   | 0.5612          | 0.7312  | 0.5612 |
| No log        | 8.6667  | 52   | 0.5739          | 0.7312  | 0.5739 |
| No log        | 9.0     | 54   | 0.5740          | 0.7312  | 0.5740 |
| No log        | 9.3333  | 56   | 0.5707          | 0.7312  | 0.5707 |
| No log        | 9.6667  | 58   | 0.5714          | 0.7312  | 0.5714 |
| No log        | 10.0    | 60   | 0.5703          | 0.7312  | 0.5703 |
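
The "No log" entries in the Training Loss column most likely mean the Trainer's logging interval (logging_steps defaults to 500) exceeded the 60 total optimization steps, so no training loss was recorded. The QWK column is quadratic weighted kappa, which can be computed with scikit-learn as sketched below; the actual evaluation code is not published, and the label values here are placeholders:

```python
# Sketch: quadratic weighted kappa (the QWK column) via scikit-learn.
# Label values are placeholders; the real evaluation code is not published.
from sklearn.metrics import cohen_kappa_score

y_true = [0, 1, 2, 2, 3, 4]  # placeholder gold scores
y_pred = [0, 1, 1, 2, 3, 4]  # placeholder predictions (e.g. rounded model outputs)
qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
print(f"QWK = {qwk:.4f}")
```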

Framework versions

  • Transformers 4.44.0
  • PyTorch 2.4.0
  • Datasets 2.21.0
  • Tokenizers 0.19.1