
arabert_cross_organization_task1_fold4

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4588
  • Qwk: 0.6885
  • Mse: 0.4588
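
For reference, metrics of this kind are typically computed from model predictions and gold labels along the following lines. This is a hedged sketch using scikit-learn, not the authors' evaluation code: `preds` and `labels` are placeholder arrays, and rounding predictions to integer bins before computing quadratic weighted kappa is an assumption.

```python
# Hypothetical sketch of how Qwk and Mse above could be computed.
# `preds` (model outputs) and `labels` (gold scores) are placeholders.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

preds = np.array([2.1, 3.4, 1.0, 4.2])   # placeholder predictions
labels = np.array([2, 3, 1, 4])          # placeholder gold scores

mse = mean_squared_error(labels, preds)
# Quadratic weighted kappa requires discrete categories, so round first
# (an assumption; the card does not document the binning scheme).
qwk = cohen_kappa_score(labels, np.rint(preds).astype(int),
                        weights="quadratic")
print(f"Mse: {mse:.4f}  Qwk: {qwk:.4f}")
```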

Model description

More information needed

Intended uses & limitations

More information needed
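
Pending documentation from the authors, the checkpoint can be loaded with the standard Transformers API. The sketch below assumes a single-output regression head (inferred from the MSE/QWK metrics above, not confirmed by the card) and uses a placeholder input.

```python
# Hedged sketch: load the fine-tuned checkpoint and score one text.
# A 1-label regression head is an assumption, not confirmed by the card.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "salbatarni/arabert_cross_organization_task1_fold4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "نص عربي للتقييم"  # placeholder Arabic input
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```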

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
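
These settings map directly onto 🤗 Transformers `TrainingArguments`, as sketched below. This is a reconstruction of the listed configuration, not the authors' published training script; the `output_dir` is a placeholder.

```python
# Sketch of the hyperparameters above expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="arabert_cross_organization_task1_fold4",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    # Adam betas/epsilon match the values reported above.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```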

Training results

| Training Loss | Epoch | Step | Validation Loss | Qwk    | Mse    |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| No log        | 0.125 | 2    | 3.0429          | 0.0044 | 3.0429 |
| No log        | 0.25  | 4    | 1.6578          | 0.1373 | 1.6578 |
| No log        | 0.375 | 6    | 0.9445          | 0.3393 | 0.9445 |
| No log        | 0.5   | 8    | 0.7446          | 0.4700 | 0.7446 |
| No log        | 0.625 | 10   | 0.8109          | 0.4239 | 0.8109 |
| No log        | 0.75  | 12   | 0.5652          | 0.6068 | 0.5652 |
| No log        | 0.875 | 14   | 0.6877          | 0.6129 | 0.6877 |
| No log        | 1.0   | 16   | 0.5401          | 0.6022 | 0.5401 |
| No log        | 1.125 | 18   | 0.5571          | 0.5613 | 0.5571 |
| No log        | 1.25  | 20   | 0.4854          | 0.6440 | 0.4854 |
| No log        | 1.375 | 22   | 0.5443          | 0.7366 | 0.5443 |
| No log        | 1.5   | 24   | 0.5077          | 0.7444 | 0.5077 |
| No log        | 1.625 | 26   | 0.5015          | 0.6266 | 0.5015 |
| No log        | 1.75  | 28   | 0.5012          | 0.6164 | 0.5012 |
| No log        | 1.875 | 30   | 0.4504          | 0.7043 | 0.4504 |
| No log        | 2.0   | 32   | 0.4864          | 0.7187 | 0.4864 |
| No log        | 2.125 | 34   | 0.4305          | 0.7243 | 0.4305 |
| No log        | 2.25  | 36   | 0.4572          | 0.6579 | 0.4572 |
| No log        | 2.375 | 38   | 0.4545          | 0.7032 | 0.4545 |
| No log        | 2.5   | 40   | 0.4159          | 0.7123 | 0.4159 |
| No log        | 2.625 | 42   | 0.4122          | 0.7591 | 0.4122 |
| No log        | 2.75  | 44   | 0.4424          | 0.7617 | 0.4424 |
| No log        | 2.875 | 46   | 0.4110          | 0.7600 | 0.4110 |
| No log        | 3.0   | 48   | 0.3993          | 0.7372 | 0.3993 |
| No log        | 3.125 | 50   | 0.3990          | 0.7391 | 0.3990 |
| No log        | 3.25  | 52   | 0.3923          | 0.7306 | 0.3923 |
| No log        | 3.375 | 54   | 0.4375          | 0.7685 | 0.4375 |
| No log        | 3.5   | 56   | 0.4628          | 0.7698 | 0.4628 |
| No log        | 3.625 | 58   | 0.4089          | 0.7365 | 0.4089 |
| No log        | 3.75  | 60   | 0.4113          | 0.7238 | 0.4113 |
| No log        | 3.875 | 62   | 0.4117          | 0.7308 | 0.4117 |
| No log        | 4.0   | 64   | 0.4183          | 0.7175 | 0.4183 |
| No log        | 4.125 | 66   | 0.4326          | 0.7175 | 0.4326 |
| No log        | 4.25  | 68   | 0.4439          | 0.7360 | 0.4439 |
| No log        | 4.375 | 70   | 0.4530          | 0.7375 | 0.4530 |
| No log        | 4.5   | 72   | 0.4458          | 0.7040 | 0.4458 |
| No log        | 4.625 | 74   | 0.4431          | 0.7054 | 0.4431 |
| No log        | 4.75  | 76   | 0.4403          | 0.6980 | 0.4403 |
| No log        | 4.875 | 78   | 0.4350          | 0.7144 | 0.4350 |
| No log        | 5.0   | 80   | 0.4311          | 0.7511 | 0.4311 |
| No log        | 5.125 | 82   | 0.4257          | 0.7418 | 0.4257 |
| No log        | 5.25  | 84   | 0.4298          | 0.7174 | 0.4298 |
| No log        | 5.375 | 86   | 0.4420          | 0.6877 | 0.4420 |
| No log        | 5.5   | 88   | 0.4344          | 0.7174 | 0.4344 |
| No log        | 5.625 | 90   | 0.4324          | 0.7146 | 0.4324 |
| No log        | 5.75  | 92   | 0.4363          | 0.7566 | 0.4363 |
| No log        | 5.875 | 94   | 0.4499          | 0.7689 | 0.4499 |
| No log        | 6.0   | 96   | 0.4217          | 0.7367 | 0.4217 |
| No log        | 6.125 | 98   | 0.4252          | 0.7237 | 0.4252 |
| No log        | 6.25  | 100  | 0.4235          | 0.7141 | 0.4235 |
| No log        | 6.375 | 102  | 0.4211          | 0.7230 | 0.4211 |
| No log        | 6.5   | 104  | 0.4285          | 0.7493 | 0.4285 |
| No log        | 6.625 | 106  | 0.4367          | 0.7530 | 0.4367 |
| No log        | 6.75  | 108  | 0.4214          | 0.7457 | 0.4214 |
| No log        | 6.875 | 110  | 0.4380          | 0.6930 | 0.4380 |
| No log        | 7.0   | 112  | 0.4555          | 0.6727 | 0.4555 |
| No log        | 7.125 | 114  | 0.4358          | 0.6947 | 0.4358 |
| No log        | 7.25  | 116  | 0.4270          | 0.7277 | 0.4270 |
| No log        | 7.375 | 118  | 0.4349          | 0.7457 | 0.4349 |
| No log        | 7.5   | 120  | 0.4430          | 0.7382 | 0.4430 |
| No log        | 7.625 | 122  | 0.4539          | 0.7257 | 0.4539 |
| No log        | 7.75  | 124  | 0.4623          | 0.7204 | 0.4623 |
| No log        | 7.875 | 126  | 0.4640          | 0.7110 | 0.4640 |
| No log        | 8.0   | 128  | 0.4644          | 0.7115 | 0.4644 |
| No log        | 8.125 | 130  | 0.4639          | 0.7095 | 0.4639 |
| No log        | 8.25  | 132  | 0.4612          | 0.7073 | 0.4612 |
| No log        | 8.375 | 134  | 0.4652          | 0.6865 | 0.4652 |
| No log        | 8.5   | 136  | 0.4689          | 0.6753 | 0.4689 |
| No log        | 8.625 | 138  | 0.4608          | 0.6849 | 0.4608 |
| No log        | 8.75  | 140  | 0.4553          | 0.6907 | 0.4553 |
| No log        | 8.875 | 142  | 0.4538          | 0.6930 | 0.4538 |
| No log        | 9.0   | 144  | 0.4537          | 0.7172 | 0.4537 |
| No log        | 9.125 | 146  | 0.4564          | 0.7273 | 0.4564 |
| No log        | 9.25  | 148  | 0.4582          | 0.7294 | 0.4582 |
| No log        | 9.375 | 150  | 0.4572          | 0.7267 | 0.4572 |
| No log        | 9.5   | 152  | 0.4559          | 0.7093 | 0.4559 |
| No log        | 9.625 | 154  | 0.4566          | 0.7000 | 0.4566 |
| No log        | 9.75  | 156  | 0.4582          | 0.6885 | 0.4582 |
| No log        | 9.875 | 158  | 0.4588          | 0.6885 | 0.4588 |
| No log        | 10.0  | 160  | 0.4588          | 0.6885 | 0.4588 |

Framework versions

  • Transformers 4.44.0
  • Pytorch 2.4.0
  • Datasets 2.21.0
  • Tokenizers 0.19.1
