arabert_cross_vocabulary_task1_fold3

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6543
  • QWK (Quadratic Weighted Kappa): 0.8547
  • MSE (Mean Squared Error): 0.6543
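
For reference, QWK measures chance-corrected agreement between predicted and gold scores, and MSE their mean squared error. Below is a minimal sketch of how both can be computed with scikit-learn, assuming integer scores on a shared scale; the card does not state how the reported values were actually produced, and the gold/pred values here are purely illustrative:

```python
# Hedged sketch: computing QWK and MSE as reported above.
# Assumption: integer scores on a shared scale; the values below
# are hypothetical and only illustrate the metric calls.
from sklearn.metrics import cohen_kappa_score, mean_squared_error

gold = [3, 1, 4, 2, 3]  # hypothetical reference scores
pred = [3, 2, 4, 2, 2]  # hypothetical model predictions

qwk = cohen_kappa_score(gold, pred, weights="quadratic")
mse = mean_squared_error(gold, pred)
print(f"QWK: {qwk:.4f}  MSE: {mse:.4f}")
```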

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the equivalent TrainingArguments follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
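
As referenced above, here is a minimal sketch of how these values map onto transformers.TrainingArguments. The output directory is a placeholder, and the Trainer's default optimizer (AdamW with betas=(0.9, 0.999) and eps=1e-8) and linear schedule already match the settings listed:

```python
# Hedged sketch: TrainingArguments reproducing the hyperparameters above.
# output_dir is a placeholder; dataset, model head, and metric function
# are not documented in this card and are omitted here.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="arabert_cross_vocabulary_task1_fold3",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
)
```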

Training results

The training loss column reads "No log", likely because the run's 150 steps ended before the Trainer's first logging step.

| Training Loss | Epoch   | Step | Validation Loss | QWK    | MSE    |
|:-------------:|:-------:|:----:|:---------------:|:------:|:------:|
| No log        | 0.1333  | 2    | 2.1991          | 0.1474 | 2.1991 |
| No log        | 0.2667  | 4    | 1.6062          | 0.1520 | 1.6062 |
| No log        | 0.4000  | 6    | 1.5937          | 0.3436 | 1.5937 |
| No log        | 0.5333  | 8    | 1.1613          | 0.6120 | 1.1613 |
| No log        | 0.6667  | 10   | 1.0503          | 0.7478 | 1.0503 |
| No log        | 0.8000  | 12   | 0.7718          | 0.7837 | 0.7718 |
| No log        | 0.9333  | 14   | 0.6578          | 0.7874 | 0.6578 |
| No log        | 1.0667  | 16   | 0.6200          | 0.7876 | 0.6200 |
| No log        | 1.2000  | 18   | 0.6610          | 0.8102 | 0.6610 |
| No log        | 1.3333  | 20   | 0.6508          | 0.8233 | 0.6508 |
| No log        | 1.4667  | 22   | 0.6181          | 0.8360 | 0.6181 |
| No log        | 1.6000  | 24   | 0.6494          | 0.8333 | 0.6494 |
| No log        | 1.7333  | 26   | 0.5220          | 0.7872 | 0.5220 |
| No log        | 1.8667  | 28   | 0.5440          | 0.8098 | 0.5440 |
| No log        | 2.0000  | 30   | 0.6917          | 0.8299 | 0.6917 |
| No log        | 2.1333  | 32   | 0.7100          | 0.8241 | 0.7100 |
| No log        | 2.2667  | 34   | 0.6018          | 0.8256 | 0.6018 |
| No log        | 2.4000  | 36   | 0.5920          | 0.8362 | 0.5920 |
| No log        | 2.5333  | 38   | 0.5900          | 0.8254 | 0.5900 |
| No log        | 2.6667  | 40   | 0.5183          | 0.8071 | 0.5183 |
| No log        | 2.8000  | 42   | 0.5565          | 0.8214 | 0.5565 |
| No log        | 2.9333  | 44   | 0.5828          | 0.8308 | 0.5828 |
| No log        | 3.0667  | 46   | 0.6086          | 0.8345 | 0.6086 |
| No log        | 3.2000  | 48   | 0.9512          | 0.8040 | 0.9512 |
| No log        | 3.3333  | 50   | 1.0514          | 0.7892 | 1.0514 |
| No log        | 3.4667  | 52   | 0.7574          | 0.8307 | 0.7574 |
| No log        | 3.6000  | 54   | 0.5438          | 0.8157 | 0.5438 |
| No log        | 3.7333  | 56   | 0.5688          | 0.8295 | 0.5688 |
| No log        | 3.8667  | 58   | 0.7090          | 0.8272 | 0.7090 |
| No log        | 4.0000  | 60   | 0.6153          | 0.8315 | 0.6153 |
| No log        | 4.1333  | 62   | 0.5938          | 0.8311 | 0.5938 |
| No log        | 4.2667  | 64   | 0.5535          | 0.8284 | 0.5535 |
| No log        | 4.4000  | 66   | 0.7113          | 0.8397 | 0.7113 |
| No log        | 4.5333  | 68   | 0.7956          | 0.8396 | 0.7956 |
| No log        | 4.6667  | 70   | 0.6577          | 0.8415 | 0.6577 |
| No log        | 4.8000  | 72   | 0.5410          | 0.8175 | 0.5410 |
| No log        | 4.9333  | 74   | 0.5447          | 0.8164 | 0.5447 |
| No log        | 5.0667  | 76   | 0.6400          | 0.8362 | 0.6400 |
| No log        | 5.2000  | 78   | 0.6465          | 0.8350 | 0.6465 |
| No log        | 5.3333  | 80   | 0.5521          | 0.8190 | 0.5521 |
| No log        | 5.4667  | 82   | 0.5413          | 0.8159 | 0.5413 |
| No log        | 5.6000  | 84   | 0.5904          | 0.8356 | 0.5904 |
| No log        | 5.7333  | 86   | 0.6050          | 0.8329 | 0.6050 |
| No log        | 5.8667  | 88   | 0.5617          | 0.8344 | 0.5617 |
| No log        | 6.0000  | 90   | 0.5078          | 0.8086 | 0.5078 |
| No log        | 6.1333  | 92   | 0.5744          | 0.8431 | 0.5744 |
| No log        | 6.2667  | 94   | 0.8177          | 0.8508 | 0.8177 |
| No log        | 6.4000  | 96   | 0.9516          | 0.8372 | 0.9516 |
| No log        | 6.5333  | 98   | 0.8522          | 0.8514 | 0.8522 |
| No log        | 6.6667  | 100  | 0.6484          | 0.8513 | 0.6484 |
| No log        | 6.8000  | 102  | 0.5556          | 0.8426 | 0.5556 |
| No log        | 6.9333  | 104  | 0.5410          | 0.8334 | 0.5410 |
| No log        | 7.0667  | 106  | 0.6003          | 0.8457 | 0.6003 |
| No log        | 7.2000  | 108  | 0.6955          | 0.8578 | 0.6955 |
| No log        | 7.3333  | 110  | 0.6870          | 0.8586 | 0.6870 |
| No log        | 7.4667  | 112  | 0.6444          | 0.8510 | 0.6444 |
| No log        | 7.6000  | 114  | 0.5518          | 0.8355 | 0.5518 |
| No log        | 7.7333  | 116  | 0.5347          | 0.8285 | 0.5347 |
| No log        | 7.8667  | 118  | 0.5790          | 0.8402 | 0.5790 |
| No log        | 8.0000  | 120  | 0.6668          | 0.8499 | 0.6668 |
| No log        | 8.1333  | 122  | 0.6878          | 0.8492 | 0.6878 |
| No log        | 8.2667  | 124  | 0.6406          | 0.8545 | 0.6406 |
| No log        | 8.4000  | 126  | 0.6005          | 0.8510 | 0.6005 |
| No log        | 8.5333  | 128  | 0.6017          | 0.8494 | 0.6017 |
| No log        | 8.6667  | 130  | 0.5940          | 0.8481 | 0.5940 |
| No log        | 8.8000  | 132  | 0.5792          | 0.8445 | 0.5792 |
| No log        | 8.9333  | 134  | 0.5857          | 0.8481 | 0.5857 |
| No log        | 9.0667  | 136  | 0.6217          | 0.8492 | 0.6217 |
| No log        | 9.2000  | 138  | 0.6735          | 0.8562 | 0.6735 |
| No log        | 9.3333  | 140  | 0.7067          | 0.8519 | 0.7067 |
| No log        | 9.4667  | 142  | 0.7028          | 0.8584 | 0.7028 |
| No log        | 9.6000  | 144  | 0.7006          | 0.8584 | 0.7006 |
| No log        | 9.7333  | 146  | 0.6820          | 0.8583 | 0.6820 |
| No log        | 9.8667  | 148  | 0.6634          | 0.8629 | 0.6634 |
| No log        | 10.0000 | 150  | 0.6543          | 0.8547 | 0.6543 |
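
A minimal sketch of loading this checkpoint for inference with Transformers follows. The single-output regression head is an assumption suggested by the MSE/QWK metrics; the card does not document the task, head, or label scale:

```python
# Hedged sketch: inference with the fine-tuned checkpoint.
# Assumption: a sequence-classification head with one regression output
# (suggested by the MSE/QWK metrics); adjust if the head differs.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "salbatarni/arabert_cross_vocabulary_task1_fold3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("نص للتجربة", return_tensors="pt")  # "a sample text"
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"Predicted score: {score:.4f}")
```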

Framework versions

  • Transformers 4.44.0
  • PyTorch 2.4.0
  • Datasets 2.21.0
  • Tokenizers 0.19.1
