
arabert_cross_organization_task3_fold2

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set (a usage sketch follows the metrics):

  • Loss: 1.6024
  • Qwk: 0.0538
  • Mse: 1.6024
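
Since no usage instructions are documented, here is a minimal inference sketch. It assumes the checkpoint carries a single-output regression head (the Qwk and MSE metrics suggest ordinal score prediction); the input text is a hypothetical placeholder.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "salbatarni/arabert_cross_organization_task3_fold2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Hypothetical Arabic input; replace with real data.
text = "نص عربي للتقييم"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    logits = model(**inputs).logits

# Under the single-output regression assumption, the score is the raw logit.
score = logits.squeeze().item()
print(f"Predicted score: {score:.4f}")
```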

Model description

More information needed. (From the repository metadata: ~135M parameters, stored as F32 safetensors.)

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

  • learning_rate: 2e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
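
As a reference, these settings map onto transformers `TrainingArguments` roughly as sketched below. The `output_dir` name is an assumption, and the dataset/metric wiring is omitted because the actual training script is not documented.

```python
from transformers import TrainingArguments

# Sketch mapping the hyperparameters listed above onto TrainingArguments.
training_args = TrainingArguments(
    output_dir="arabert_cross_organization_task3_fold2",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    adam_beta1=0.9,      # Adam betas=(0.9, 0.999) as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,   # epsilon=1e-08 as listed above
)
```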

Training results

| Training Loss | Epoch  | Step | Validation Loss | Qwk     | Mse    |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|
| No log        | 0.1111 | 2    | 4.4408          | 0.0040  | 4.4408 |
| No log        | 0.2222 | 4    | 2.1445          | -0.0019 | 2.1445 |
| No log        | 0.3333 | 6    | 1.2335          | -0.0070 | 1.2335 |
| No log        | 0.4444 | 8    | 1.2037          | -0.0160 | 1.2037 |
| No log        | 0.5556 | 10   | 1.2041          | -0.0073 | 1.2041 |
| No log        | 0.6667 | 12   | 1.1517          | 0.0051  | 1.1517 |
| No log        | 0.7778 | 14   | 1.2374          | 0.0328  | 1.2374 |
| No log        | 0.8889 | 16   | 1.1607          | -0.0476 | 1.1607 |
| No log        | 1.0    | 18   | 1.1734          | -0.0518 | 1.1734 |
| No log        | 1.1111 | 20   | 1.2625          | -0.0809 | 1.2625 |
| No log        | 1.2222 | 22   | 1.3757          | 0.0003  | 1.3757 |
| No log        | 1.3333 | 24   | 1.1818          | -0.1060 | 1.1818 |
| No log        | 1.4444 | 26   | 1.2926          | 0.0000  | 1.2926 |
| No log        | 1.5556 | 28   | 1.6227          | -0.0906 | 1.6227 |
| No log        | 1.6667 | 30   | 1.8505          | 0.0618  | 1.8505 |
| No log        | 1.7778 | 32   | 1.4810          | 0.0402  | 1.4810 |
| No log        | 1.8889 | 34   | 1.6514          | 0.0454  | 1.6514 |
| No log        | 2.0    | 36   | 1.9399          | 0.0198  | 1.9399 |
| No log        | 2.1111 | 38   | 1.6927          | 0.0351  | 1.6927 |
| No log        | 2.2222 | 40   | 1.5365          | 0.0112  | 1.5365 |
| No log        | 2.3333 | 42   | 1.7530          | 0.0199  | 1.7530 |
| No log        | 2.4444 | 44   | 1.5376          | 0.0302  | 1.5376 |
| No log        | 2.5556 | 46   | 1.4487          | -0.0223 | 1.4487 |
| No log        | 2.6667 | 48   | 1.7456          | 0.0144  | 1.7456 |
| No log        | 2.7778 | 50   | 1.9590          | 0.0498  | 1.9590 |
| No log        | 2.8889 | 52   | 1.6503          | 0.0145  | 1.6503 |
| No log        | 3.0    | 54   | 1.4764          | -0.0314 | 1.4764 |
| No log        | 3.1111 | 56   | 1.6835          | 0.0294  | 1.6835 |
| No log        | 3.2222 | 58   | 1.7955          | 0.0312  | 1.7955 |
| No log        | 3.3333 | 60   | 1.7538          | 0.0635  | 1.7538 |
| No log        | 3.4444 | 62   | 1.4927          | 0.0176  | 1.4927 |
| No log        | 3.5556 | 64   | 1.6904          | 0.0720  | 1.6904 |
| No log        | 3.6667 | 66   | 1.8393          | 0.0312  | 1.8393 |
| No log        | 3.7778 | 68   | 1.8313          | 0.0254  | 1.8313 |
| No log        | 3.8889 | 70   | 1.6934          | 0.0842  | 1.6934 |
| No log        | 4.0    | 72   | 1.5049          | 0.0730  | 1.5049 |
| No log        | 4.1111 | 74   | 1.6026          | 0.0175  | 1.6026 |
| No log        | 4.2222 | 76   | 1.5882          | 0.0325  | 1.5882 |
| No log        | 4.3333 | 78   | 1.6196          | 0.0264  | 1.6196 |
| No log        | 4.4444 | 80   | 1.5514          | 0.0974  | 1.5514 |
| No log        | 4.5556 | 82   | 1.4767          | 0.0909  | 1.4767 |
| No log        | 4.6667 | 84   | 1.2836          | 0.1034  | 1.2836 |
| No log        | 4.7778 | 86   | 1.3626          | 0.0270  | 1.3626 |
| No log        | 4.8889 | 88   | 1.6465          | 0.0229  | 1.6465 |
| No log        | 5.0    | 90   | 1.8343          | -0.0220 | 1.8343 |
| No log        | 5.1111 | 92   | 1.6557          | 0.0083  | 1.6557 |
| No log        | 5.2222 | 94   | 1.3225          | -0.0058 | 1.3225 |
| No log        | 5.3333 | 96   | 1.2482          | 0.0470  | 1.2482 |
| No log        | 5.4444 | 98   | 1.3076          | 0.0607  | 1.3076 |
| No log        | 5.5556 | 100  | 1.6016          | 0.0775  | 1.6016 |
| No log        | 5.6667 | 102  | 1.8225          | 0.0197  | 1.8225 |
| No log        | 5.7778 | 104  | 1.7541          | -0.0002 | 1.7541 |
| No log        | 5.8889 | 106  | 1.5325          | 0.0489  | 1.5325 |
| No log        | 6.0    | 108  | 1.4866          | 0.1203  | 1.4866 |
| No log        | 6.1111 | 110  | 1.5168          | 0.0747  | 1.5168 |
| No log        | 6.2222 | 112  | 1.5806          | 0.0628  | 1.5806 |
| No log        | 6.3333 | 114  | 1.5301          | 0.0840  | 1.5301 |
| No log        | 6.4444 | 116  | 1.5252          | 0.1176  | 1.5252 |
| No log        | 6.5556 | 118  | 1.6195          | 0.0414  | 1.6195 |
| No log        | 6.6667 | 120  | 1.7519          | -0.0347 | 1.7519 |
| No log        | 6.7778 | 122  | 1.7121          | 0.0055  | 1.7121 |
| No log        | 6.8889 | 124  | 1.5662          | 0.0951  | 1.5662 |
| No log        | 7.0    | 126  | 1.5098          | 0.1301  | 1.5098 |
| No log        | 7.1111 | 128  | 1.5746          | 0.1200  | 1.5746 |
| No log        | 7.2222 | 130  | 1.7267          | -0.0092 | 1.7267 |
| No log        | 7.3333 | 132  | 1.7245          | -0.0092 | 1.7245 |
| No log        | 7.4444 | 134  | 1.5965          | 0.0502  | 1.5965 |
| No log        | 7.5556 | 136  | 1.5245          | 0.0966  | 1.5245 |
| No log        | 7.6667 | 138  | 1.4468          | 0.0095  | 1.4468 |
| No log        | 7.7778 | 140  | 1.4564          | 0.0425  | 1.4564 |
| No log        | 7.8889 | 142  | 1.5131          | 0.0840  | 1.5131 |
| No log        | 8.0    | 144  | 1.6343          | 0.0606  | 1.6343 |
| No log        | 8.1111 | 146  | 1.6708          | 0.0339  | 1.6708 |
| No log        | 8.2222 | 148  | 1.7099          | 0.0339  | 1.7099 |
| No log        | 8.3333 | 150  | 1.7246          | 0.0339  | 1.7246 |
| No log        | 8.4444 | 152  | 1.7128          | 0.0339  | 1.7128 |
| No log        | 8.5556 | 154  | 1.6734          | 0.0657  | 1.6734 |
| No log        | 8.6667 | 156  | 1.6578          | 0.0520  | 1.6578 |
| No log        | 8.7778 | 158  | 1.7162          | 0.0533  | 1.7162 |
| No log        | 8.8889 | 160  | 1.7512          | -0.0403 | 1.7512 |
| No log        | 9.0    | 162  | 1.7393          | -0.0034 | 1.7393 |
| No log        | 9.1111 | 164  | 1.6847          | 0.0542  | 1.6847 |
| No log        | 9.2222 | 166  | 1.6059          | 0.0442  | 1.6059 |
| No log        | 9.3333 | 168  | 1.5649          | 0.0881  | 1.5649 |
| No log        | 9.4444 | 170  | 1.5449          | 0.1090  | 1.5449 |
| No log        | 9.5556 | 172  | 1.5536          | 0.1090  | 1.5536 |
| No log        | 9.6667 | 174  | 1.5814          | 0.0419  | 1.5814 |
| No log        | 9.7778 | 176  | 1.5969          | 0.0560  | 1.5969 |
| No log        | 9.8889 | 178  | 1.6011          | 0.0560  | 1.6011 |
| No log        | 10.0   | 180  | 1.6024          | 0.0538  | 1.6024 |
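
The Qwk column above is the quadratic weighted kappa. As a hedged illustration, it can be computed with scikit-learn as sketched below; the gold scores, the predictions, and the rounding of continuous outputs to integer labels are all assumptions, not the card's documented evaluation code.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

# Hypothetical gold scores and continuous model outputs.
y_true = np.array([3, 1, 2, 4, 0])
y_pred = np.array([2.6, 1.2, 2.1, 3.8, 0.4])

# Mse as reported in the table: mean squared error on raw outputs.
mse = mean_squared_error(y_true, y_pred)

# Qwk: quadratic weighted kappa on integer labels; rounding the
# continuous predictions is an assumed post-processing step.
qwk = cohen_kappa_score(y_true, np.rint(y_pred).astype(int), weights="quadratic")

print(f"Mse: {mse:.4f}  Qwk: {qwk:.4f}")
```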

Framework versions

  • Transformers 4.44.0
  • Pytorch 2.4.0
  • Datasets 2.21.0
  • Tokenizers 0.19.1