---
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: arabert_cross_vocabulary_task7_fold2
  results: []
---

# arabert_cross_vocabulary_task7_fold2

This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.7372
- Qwk: 0.0
- Mse: 0.7250
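
The checkpoint can be loaded like any other Transformers model. Below is a minimal sketch, assuming the repository id `salbatarni/arabert_cross_vocabulary_task7_fold2` and a single-logit regression head (suggested by the Mse metric above); neither is confirmed by this card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical repository id, inferred from the model-index entry above.
model_id = "salbatarni/arabert_cross_vocabulary_task7_fold2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("نص تجريبي للتقييم", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# If the head really is a single-logit regression head, the raw logit
# is the predicted score.
print(logits)
```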

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):

- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
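
The list maps onto Transformers `TrainingArguments` roughly as follows. This is a reconstruction, not the author's script: `output_dir` is a placeholder, and the optimizer is left at the `Trainer` default, which matches the Adam settings listed above.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="arabert_cross_vocabulary_task7_fold2",  # hypothetical
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default
    # optimizer configuration used by Trainer in Transformers 4.44.
)
```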

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Qwk     | Mse    |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|
| No log        | 0.0317 | 2    | 3.4222          | 0.0005  | 3.4448 |
| No log        | 0.0635 | 4    | 0.8344          | -0.0075 | 0.8385 |
| No log        | 0.0952 | 6    | 0.6004          | 0.0456  | 0.5963 |
| No log        | 0.1270 | 8    | 0.5812          | -0.0227 | 0.5744 |
| No log        | 0.1587 | 10   | 0.6585          | 0.0     | 0.6498 |
| No log        | 0.1905 | 12   | 0.6873          | 0.0     | 0.6781 |
| No log        | 0.2222 | 14   | 0.6603          | 0.0     | 0.6514 |
| No log        | 0.2540 | 16   | 0.5985          | -0.0496 | 0.5919 |
| No log        | 0.2857 | 18   | 0.5749          | -0.0344 | 0.5697 |
| No log        | 0.3175 | 20   | 0.5691          | -0.0546 | 0.5649 |
| No log        | 0.3492 | 22   | 0.5753          | -0.0344 | 0.5700 |
| No log        | 0.3810 | 24   | 0.6027          | -0.0886 | 0.5953 |
| No log        | 0.4127 | 26   | 0.6013          | -0.0886 | 0.5936 |
| No log        | 0.4444 | 28   | 0.6360          | -0.0252 | 0.6265 |
| No log        | 0.4762 | 30   | 0.6499          | -0.0252 | 0.6399 |
| No log        | 0.5079 | 32   | 0.7649          | 0.0     | 0.7525 |
| No log        | 0.5397 | 34   | 0.8350          | 0.0     | 0.8221 |
| No log        | 0.5714 | 36   | 0.8330          | 0.0     | 0.8214 |
| No log        | 0.6032 | 38   | 0.7620          | 0.0     | 0.7524 |
| No log        | 0.6349 | 40   | 0.7292          | 0.0     | 0.7198 |
| No log        | 0.6667 | 42   | 0.6967          | 0.0     | 0.6875 |
| No log        | 0.6984 | 44   | 0.7020          | 0.0     | 0.6922 |
| No log        | 0.7302 | 46   | 0.7157          | 0.0     | 0.7048 |
| No log        | 0.7619 | 48   | 0.7198          | 0.0     | 0.7083 |
| No log        | 0.7937 | 50   | 0.6970          | 0.0     | 0.6856 |
| No log        | 0.8254 | 52   | 0.6743          | 0.0     | 0.6632 |
| No log        | 0.8571 | 54   | 0.6647          | 0.0     | 0.6537 |
| No log        | 0.8889 | 56   | 0.6819          | 0.0     | 0.6706 |
| No log        | 0.9206 | 58   | 0.7064          | 0.0     | 0.6946 |
| No log        | 0.9524 | 60   | 0.7251          | 0.0     | 0.7130 |
| No log        | 0.9841 | 62   | 0.7372          | 0.0     | 0.7250 |
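
The Qwk column is presumably quadratic weighted Cohen's kappa. Below is a minimal sketch of a matching `compute_metrics` function for `Trainer`, assuming a single-logit regression head and rounding predictions to integer score bins; the rounding scheme is an assumption, not documented on this card.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def compute_metrics(eval_pred):
    """Qwk and Mse, matching the table columns above (hedged sketch)."""
    predictions, labels = eval_pred
    predictions = predictions.squeeze(-1)  # single-logit regression head
    mse = mean_squared_error(labels, predictions)
    # Quadratic weighted kappa over rounded integer bins (assumption).
    qwk = cohen_kappa_score(
        np.rint(labels).astype(int),
        np.rint(predictions).astype(int),
        weights="quadratic",
    )
    return {"qwk": qwk, "mse": mse}
```

A Qwk of 0.0, as in most rows above, is what `cohen_kappa_score` returns when the rounded predictions all fall into a single class.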

### Framework versions

- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1