---
base_model: aubmindlab/bert-base-arabertv02
tags:
  - generated_from_trainer
model-index:
  - name: arabert_cross_vocabulary_task1_fold0
    results: []
---

# arabert_cross_vocabulary_task1_fold0

This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset. It achieves the following results on the evaluation set (see the usage sketch after the list):

- Loss: 0.9107
- Qwk: 0.3160
- Mse: 0.9107
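
A minimal usage sketch follows. It assumes the checkpoint is published under the hub id `salbatarni/arabert_cross_vocabulary_task1_fold0` (inferred from the model-index name above) and that the model carries a single-logit regression head, which is consistent with the Mse/Qwk metrics; both points are assumptions, not confirmed by this card.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical hub id, inferred from the model-index name above.
model_id = "salbatarni/arabert_cross_vocabulary_task1_fold0"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# num_labels=1 assumes a regression (scoring) head, consistent with the
# MSE/QWK metrics reported on this card.
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=1)
model.eval()

inputs = tokenizer("نص عربي للتقييم", return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"predicted score: {score:.3f}")
```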

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a training sketch follows the list):

- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
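
Below is a minimal sketch of an equivalent fine-tuning setup with the Transformers `Trainer`. The toy dataset, the `num_labels=1` regression head, and the `max_length` value are illustrative assumptions; only the hyperparameters listed above come from this card (the Adam betas/epsilon and the linear schedule are `TrainingArguments` defaults).

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "aubmindlab/bert-base-arabertv02"
tokenizer = AutoTokenizer.from_pretrained(base)
# num_labels=1 assumes a regression objective (consistent with the MSE metric).
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=1)

# Toy stand-in data; the actual training data is not documented on this card.
def encode(batch):
    enc = tokenizer(batch["text"], truncation=True,
                    padding="max_length", max_length=64)
    enc["labels"] = [float(y) for y in batch["label"]]
    return enc

ds = Dataset.from_dict(
    {"text": ["مثال أول", "مثال ثان"], "label": [1.0, 3.0]}
).map(encode, batched=True)

# Mirrors the hyperparameters listed above; Adam betas=(0.9, 0.999),
# epsilon=1e-08, and the linear LR schedule are the defaults.
args = TrainingArguments(
    output_dir="arabert_cross_vocabulary_task1_fold0",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)

trainer = Trainer(model=model, args=args, train_dataset=ds, eval_dataset=ds)
trainer.train()
```

With float labels and a single logit, `Trainer` falls back to a mean-squared-error loss, which matches the Mse/Loss columns being identical in the results below.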

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Qwk    | Mse    |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| No log        | 0.0351 | 2    | 3.6812          | 0.0124 | 3.6812 |
| No log        | 0.0702 | 4    | 2.2449          | 0.0807 | 2.2449 |
| No log        | 0.1053 | 6    | 1.7920          | 0.1291 | 1.7920 |
| No log        | 0.1404 | 8    | 1.1077          | 0.2184 | 1.1077 |
| No log        | 0.1754 | 10   | 1.6727          | 0.2157 | 1.6727 |
| No log        | 0.2105 | 12   | 2.3411          | 0.1852 | 2.3411 |
| No log        | 0.2456 | 14   | 1.4252          | 0.2951 | 1.4252 |
| No log        | 0.2807 | 16   | 0.8885          | 0.3981 | 0.8885 |
| No log        | 0.3158 | 18   | 0.6824          | 0.4387 | 0.6824 |
| No log        | 0.3509 | 20   | 0.6604          | 0.4473 | 0.6604 |
| No log        | 0.3860 | 22   | 0.7208          | 0.3880 | 0.7208 |
| No log        | 0.4211 | 24   | 1.1639          | 0.2846 | 1.1639 |
| No log        | 0.4561 | 26   | 2.0330          | 0.1689 | 2.0330 |
| No log        | 0.4912 | 28   | 2.2500          | 0.1485 | 2.2500 |
| No log        | 0.5263 | 30   | 1.8145          | 0.1758 | 1.8145 |
| No log        | 0.5614 | 32   | 1.1982          | 0.2547 | 1.1982 |
| No log        | 0.5965 | 34   | 0.8111          | 0.3192 | 0.8111 |
| No log        | 0.6316 | 36   | 0.7359          | 0.3443 | 0.7359 |
| No log        | 0.6667 | 38   | 0.8012          | 0.3164 | 0.8012 |
| No log        | 0.7018 | 40   | 0.9036          | 0.2985 | 0.9036 |
| No log        | 0.7368 | 42   | 1.0075          | 0.2804 | 1.0075 |
| No log        | 0.7719 | 44   | 1.0761          | 0.2855 | 1.0761 |
| No log        | 0.8070 | 46   | 1.0400          | 0.2883 | 1.0400 |
| No log        | 0.8421 | 48   | 1.0379          | 0.2963 | 1.0379 |
| No log        | 0.8772 | 50   | 1.0163          | 0.3002 | 1.0163 |
| No log        | 0.9123 | 52   | 0.9760          | 0.3168 | 0.9760 |
| No log        | 0.9474 | 54   | 0.9286          | 0.3206 | 0.9286 |
| No log        | 0.9825 | 56   | 0.9107          | 0.3160 | 0.9107 |
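
The Qwk column above is presumably quadratic weighted kappa (Cohen's kappa with quadratic weights). A hedged sketch of computing Qwk and Mse with scikit-learn follows; the toy labels and the step of rounding continuous outputs to integer scores are assumptions, not documented on this card.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

# Toy references/predictions; the real evaluation data is not documented here.
y_true = np.array([0, 1, 2, 3, 2, 1])
y_pred = np.array([0.2, 1.4, 1.9, 2.6, 2.1, 0.8])

mse = mean_squared_error(y_true, y_pred)
# Kappa compares discrete labels, so round the continuous outputs first
# (an assumption about how this card's Qwk was obtained).
qwk = cohen_kappa_score(y_true, np.rint(y_pred).astype(int), weights="quadratic")
print(f"Mse: {mse:.4f}  Qwk: {qwk:.4f}")
```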

### Framework versions

- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1