---
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: arabert_cross_vocabulary_task2_fold5
  results: []
---

# arabert_cross_vocabulary_task2_fold5

This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.3302
- Qwk: 0.7915
- Mse: 0.3302
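
Since the reported loss equals the MSE, the model presumably uses a single-output regression head. A minimal inference sketch, assuming the checkpoint is published under `salbatarni/arabert_cross_vocabulary_task2_fold5` (a repo id inferred from the model name, not stated in the card):

```python
# Minimal inference sketch; the repo id and regression head (num_labels=1)
# are assumptions, not confirmed by the card.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "salbatarni/arabert_cross_vocabulary_task2_fold5"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "نص عربي للتقييم"  # an Arabic input to score
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    # A regression head yields one logit, used directly as the score.
    score = model(**inputs).logits.squeeze().item()
print(score)
```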

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
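
These values map directly onto `transformers.TrainingArguments`. A configuration sketch under that assumption (`output_dir` is a placeholder; the dataset and `Trainer` wiring are not shown in the card):

```python
# Sketch of a TrainingArguments setup mirroring the listed hyperparameters.
# output_dir is a placeholder; the actual training script is not in the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="arabert_cross_vocabulary_task2_fold5",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=1,
    lr_scheduler_type="linear",  # linear decay, the Trainer default
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```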

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Qwk    | Mse    |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| No log        | 0.0351 | 2    | 1.9408          | 0.0725 | 1.9408 |
| No log        | 0.0702 | 4    | 1.1558          | 0.2495 | 1.1558 |
| No log        | 0.1053 | 6    | 0.8278          | 0.5475 | 0.8278 |
| No log        | 0.1404 | 8    | 0.9051          | 0.5890 | 0.9051 |
| No log        | 0.1754 | 10   | 1.0541          | 0.6265 | 1.0541 |
| No log        | 0.2105 | 12   | 0.9617          | 0.6991 | 0.9617 |
| No log        | 0.2456 | 14   | 0.6753          | 0.8007 | 0.6753 |
| No log        | 0.2807 | 16   | 0.4157          | 0.7665 | 0.4157 |
| No log        | 0.3158 | 18   | 0.4162          | 0.7231 | 0.4162 |
| No log        | 0.3509 | 20   | 0.3927          | 0.7223 | 0.3927 |
| No log        | 0.3860 | 22   | 0.3990          | 0.7945 | 0.3990 |
| No log        | 0.4211 | 24   | 0.4409          | 0.8243 | 0.4409 |
| No log        | 0.4561 | 26   | 0.4391          | 0.8320 | 0.4391 |
| No log        | 0.4912 | 28   | 0.3760          | 0.8013 | 0.3760 |
| No log        | 0.5263 | 30   | 0.3677          | 0.7769 | 0.3677 |
| No log        | 0.5614 | 32   | 0.3664          | 0.8062 | 0.3664 |
| No log        | 0.5965 | 34   | 0.3925          | 0.8156 | 0.3925 |
| No log        | 0.6316 | 36   | 0.4350          | 0.8384 | 0.4350 |
| No log        | 0.6667 | 38   | 0.4400          | 0.8249 | 0.4400 |
| No log        | 0.7018 | 40   | 0.4146          | 0.8159 | 0.4146 |
| No log        | 0.7368 | 42   | 0.3750          | 0.8086 | 0.3750 |
| No log        | 0.7719 | 44   | 0.3465          | 0.7933 | 0.3465 |
| No log        | 0.8070 | 46   | 0.3355          | 0.7897 | 0.3355 |
| No log        | 0.8421 | 48   | 0.3314          | 0.7870 | 0.3314 |
| No log        | 0.8772 | 50   | 0.3317          | 0.7828 | 0.3317 |
| No log        | 0.9123 | 52   | 0.3312          | 0.7828 | 0.3312 |
| No log        | 0.9474 | 54   | 0.3303          | 0.7906 | 0.3303 |
| No log        | 0.9825 | 56   | 0.3302          | 0.7915 | 0.3302 |
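
Qwk is quadratic weighted kappa, a standard agreement measure for ordinal labels; since Validation Loss equals Mse at every step, the validation loss is plainly mean squared error. A minimal sketch of computing both metrics with scikit-learn, using hypothetical labels (the card's actual evaluation code is not shown):

```python
# Sketch of the Qwk and MSE metrics with scikit-learn; labels below are
# hypothetical, and this is not the evaluation code used for this model.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = np.array([0, 1, 2, 3, 2])                 # hypothetical gold labels
y_pred_raw = np.array([0.2, 1.1, 1.8, 2.9, 2.4])   # hypothetical model outputs

mse = mean_squared_error(y_true, y_pred_raw)
# Round continuous regression outputs to the nearest integer label
# before computing kappa on the ordinal scale.
y_pred = np.rint(y_pred_raw).astype(int)
qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
print(f"MSE={mse:.4f}  QWK={qwk:.4f}")
```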

### Framework versions

- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1