
# arabert_cross_organization_task6_fold6

This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.6971
- Qwk: 0.5467
- Mse: 0.6953
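
As a hedged usage sketch (not part of the original card): loading the checkpoint with `transformers`. The single-logit regression head is an assumption inferred from the Qwk/Mse metrics; adjust if the actual head differs.

```python
# Minimal sketch: load the checkpoint and score one Arabic input.
# Assumption: the checkpoint carries a sequence-classification head;
# a one-logit regression head is inferred from the Qwk/Mse metrics.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "salbatarni/arabert_cross_organization_task6_fold6"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

inputs = tokenizer("نص تجريبي للتقييم", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels)
print(logits.squeeze().tolist())
```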

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):

- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
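
A hedged sketch of how these values map onto the `transformers` `Trainer` API; the `output_dir` and anything not listed above are assumptions:

```python
from transformers import TrainingArguments

# Only the values reported in this card are set explicitly;
# output_dir is a placeholder, everything else stays at its default.
training_args = TrainingArguments(
    output_dir="arabert_cross_organization_task6_fold6",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```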

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Qwk    | Mse    |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| No log        | 0.1176 | 2    | 2.1812          | 0.0813 | 2.1796 |
| No log        | 0.2353 | 4    | 1.1581          | 0.1541 | 1.1560 |
| No log        | 0.3529 | 6    | 0.9175          | 0.4300 | 0.9172 |
| No log        | 0.4706 | 8    | 0.7853          | 0.5387 | 0.7849 |
| No log        | 0.5882 | 10   | 0.7421          | 0.3500 | 0.7415 |
| No log        | 0.7059 | 12   | 0.7475          | 0.3575 | 0.7469 |
| No log        | 0.8235 | 14   | 0.5277          | 0.6055 | 0.5272 |
| No log        | 0.9412 | 16   | 0.6059          | 0.7252 | 0.6066 |
| No log        | 1.0588 | 18   | 0.5616          | 0.7136 | 0.5620 |
| No log        | 1.1765 | 20   | 0.5211          | 0.6304 | 0.5204 |
| No log        | 1.2941 | 22   | 0.6860          | 0.5190 | 0.6841 |
| No log        | 1.4118 | 24   | 0.6738          | 0.5216 | 0.6721 |
| No log        | 1.5294 | 26   | 0.5480          | 0.6313 | 0.5472 |
| No log        | 1.6471 | 28   | 0.5424          | 0.7060 | 0.5428 |
| No log        | 1.7647 | 30   | 0.4918          | 0.6873 | 0.4920 |
| No log        | 1.8824 | 32   | 0.5098          | 0.5684 | 0.5094 |
| No log        | 2.0    | 34   | 0.5437          | 0.5294 | 0.5430 |
| No log        | 2.1176 | 36   | 0.5312          | 0.5901 | 0.5302 |
| No log        | 2.2353 | 38   | 0.5616          | 0.5966 | 0.5604 |
| No log        | 2.3529 | 40   | 0.5882          | 0.5838 | 0.5868 |
| No log        | 2.4706 | 42   | 0.5423          | 0.6000 | 0.5413 |
| No log        | 2.5882 | 44   | 0.5067          | 0.6211 | 0.5059 |
| No log        | 2.7059 | 46   | 0.4934          | 0.6349 | 0.4926 |
| No log        | 2.8235 | 48   | 0.4940          | 0.6329 | 0.4932 |
| No log        | 2.9412 | 50   | 0.5291          | 0.5677 | 0.5279 |
| No log        | 3.0588 | 52   | 0.6166          | 0.5158 | 0.6151 |
| No log        | 3.1765 | 54   | 0.6014          | 0.5588 | 0.5998 |
| No log        | 3.2941 | 56   | 0.5316          | 0.5878 | 0.5303 |
| No log        | 3.4118 | 58   | 0.5135          | 0.5990 | 0.5124 |
| No log        | 3.5294 | 60   | 0.5285          | 0.5827 | 0.5273 |
| No log        | 3.6471 | 62   | 0.5943          | 0.5492 | 0.5929 |
| No log        | 3.7647 | 64   | 0.5882          | 0.5620 | 0.5868 |
| No log        | 3.8824 | 66   | 0.5237          | 0.5937 | 0.5227 |
| No log        | 4.0    | 68   | 0.5270          | 0.6150 | 0.5261 |
| No log        | 4.1176 | 70   | 0.5820          | 0.5589 | 0.5806 |
| No log        | 4.2353 | 72   | 0.6445          | 0.5284 | 0.6429 |
| No log        | 4.3529 | 74   | 0.6153          | 0.5627 | 0.6139 |
| No log        | 4.4706 | 76   | 0.6066          | 0.5783 | 0.6054 |
| No log        | 4.5882 | 78   | 0.6378          | 0.5639 | 0.6363 |
| No log        | 4.7059 | 80   | 0.7155          | 0.5342 | 0.7135 |
| No log        | 4.8235 | 82   | 0.7123          | 0.5305 | 0.7104 |
| No log        | 4.9412 | 84   | 0.6786          | 0.5363 | 0.6769 |
| No log        | 5.0588 | 86   | 0.6340          | 0.5611 | 0.6326 |
| No log        | 5.1765 | 88   | 0.6050          | 0.5630 | 0.6038 |
| No log        | 5.2941 | 90   | 0.6307          | 0.5564 | 0.6293 |
| No log        | 5.4118 | 92   | 0.6603          | 0.5449 | 0.6588 |
| No log        | 5.5294 | 94   | 0.6765          | 0.5483 | 0.6748 |
| No log        | 5.6471 | 96   | 0.6364          | 0.5686 | 0.6351 |
| No log        | 5.7647 | 98   | 0.6144          | 0.5967 | 0.6132 |
| No log        | 5.8824 | 100  | 0.6315          | 0.5826 | 0.6300 |
| No log        | 6.0    | 102  | 0.6964          | 0.5217 | 0.6946 |
| No log        | 6.1176 | 104  | 0.6906          | 0.5310 | 0.6887 |
| No log        | 6.2353 | 106  | 0.6656          | 0.5513 | 0.6639 |
| No log        | 6.3529 | 108  | 0.6273          | 0.5829 | 0.6259 |
| No log        | 6.4706 | 110  | 0.6354          | 0.5748 | 0.6340 |
| No log        | 6.5882 | 112  | 0.6855          | 0.5397 | 0.6839 |
| No log        | 6.7059 | 114  | 0.7228          | 0.5179 | 0.7211 |
| No log        | 6.8235 | 116  | 0.6976          | 0.5206 | 0.6960 |
| No log        | 6.9412 | 118  | 0.6558          | 0.5456 | 0.6544 |
| No log        | 7.0588 | 120  | 0.6618          | 0.5569 | 0.6605 |
| No log        | 7.1765 | 122  | 0.7088          | 0.5397 | 0.7072 |
| No log        | 7.2941 | 124  | 0.8015          | 0.4900 | 0.7996 |
| No log        | 7.4118 | 126  | 0.8354          | 0.4798 | 0.8334 |
| No log        | 7.5294 | 128  | 0.7861          | 0.4973 | 0.7842 |
| No log        | 7.6471 | 130  | 0.7081          | 0.5399 | 0.7065 |
| No log        | 7.7647 | 132  | 0.6756          | 0.5725 | 0.6741 |
| No log        | 7.8824 | 134  | 0.6874          | 0.5524 | 0.6859 |
| No log        | 8.0    | 136  | 0.7225          | 0.5459 | 0.7207 |
| No log        | 8.1176 | 138  | 0.7336          | 0.5368 | 0.7317 |
| No log        | 8.2353 | 140  | 0.7330          | 0.5258 | 0.7312 |
| No log        | 8.3529 | 142  | 0.7088          | 0.5474 | 0.7070 |
| No log        | 8.4706 | 144  | 0.7009          | 0.5474 | 0.6991 |
| No log        | 8.5882 | 146  | 0.6880          | 0.5489 | 0.6863 |
| No log        | 8.7059 | 148  | 0.6695          | 0.5530 | 0.6679 |
| No log        | 8.8235 | 150  | 0.6654          | 0.5587 | 0.6638 |
| No log        | 8.9412 | 152  | 0.6764          | 0.5464 | 0.6747 |
| No log        | 9.0588 | 154  | 0.6945          | 0.5419 | 0.6927 |
| No log        | 9.1765 | 156  | 0.7081          | 0.5278 | 0.7063 |
| No log        | 9.2941 | 158  | 0.7099          | 0.5292 | 0.7080 |
| No log        | 9.4118 | 160  | 0.7043          | 0.5419 | 0.7025 |
| No log        | 9.5294 | 162  | 0.7049          | 0.5430 | 0.7031 |
| No log        | 9.6471 | 164  | 0.7019          | 0.5430 | 0.7000 |
| No log        | 9.7647 | 166  | 0.6996          | 0.5467 | 0.6978 |
| No log        | 9.8824 | 168  | 0.6977          | 0.5467 | 0.6959 |
| No log        | 10.0   | 170  | 0.6971          | 0.5467 | 0.6953 |
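
For reference, Qwk above is the quadratically weighted Cohen's kappa and Mse the mean squared error. A minimal sketch with scikit-learn follows; rounding the continuous predictions to integer labels before the kappa computation is an assumption, not something documented in this card:

```python
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def eval_metrics(predictions, labels):
    """Sketch of the Qwk/Mse columns; the rounding step is an assumption."""
    mse = mean_squared_error(labels, predictions)  # raw regression outputs
    qwk = cohen_kappa_score(
        [round(y) for y in labels],
        [round(p) for p in predictions],
        weights="quadratic",  # quadratic weighted kappa
    )
    return {"qwk": qwk, "mse": mse}
```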

### Framework versions

- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1