
# arabert_cross_organization_task4_fold2

This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 1.7693
- Qwk (Quadratic Weighted Kappa): -0.0357
- Mse (Mean Squared Error): 1.7659
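
A Qwk near zero (or slightly negative, as above) indicates roughly chance-level agreement between predicted and reference scores. As a reference point, here is a minimal sketch of computing both metrics with scikit-learn; the score lists are hypothetical placeholders, not data from this model:

```python
# Hedged sketch: compute Quadratic Weighted Kappa (Qwk) and MSE with
# scikit-learn. The score lists below are made-up placeholders.
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = [1, 2, 3, 2, 4]  # hypothetical reference scores
y_pred = [1, 3, 3, 2, 2]  # hypothetical (rounded) model predictions

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
mse = mean_squared_error(y_true, y_pred)
print(f"Qwk: {qwk:.4f}, Mse: {mse:.4f}")
```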

## Model description

More information needed

## Intended uses & limitations

More information needed
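
No intended uses are documented. For completeness, a minimal loading sketch follows; it assumes a single-output (regression-style) sequence-classification head, which is consistent with the Mse/Qwk metrics above but is not confirmed by this card:

```python
# Minimal sketch, not an official usage example. Assumes a regression-style
# head (num_labels == 1); the actual head and label scheme are undocumented.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "salbatarni/arabert_cross_organization_task4_fold2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "نص عربي للتجربة"  # hypothetical Arabic input
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels)
print(logits.squeeze().item())  # single score if num_labels == 1
```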

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
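
For reference, a hedged sketch of how these settings map onto Hugging Face `TrainingArguments` (the `output_dir` value is a placeholder; model and dataset wiring are omitted):

```python
# Sketch only: the hyperparameters above expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="arabert_cross_organization_task4_fold2",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer's
    # default optimizer settings, so no explicit optimizer argument is needed.
)
```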

### Training results

The `No log` entries below indicate that training loss was never recorded, most likely because the run's 180 total steps fell short of the Trainer's logging interval; only evaluation metrics were logged.

| Training Loss | Epoch  | Step | Validation Loss | Qwk     | Mse    |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|
| No log | 0.1111 | 2   | 4.1886 | 0.0011  | 4.1850 |
| No log | 0.2222 | 4   | 2.2640 | 0.0     | 2.2614 |
| No log | 0.3333 | 6   | 1.3362 | 0.0183  | 1.3332 |
| No log | 0.4444 | 8   | 0.9155 | 0.0     | 0.9122 |
| No log | 0.5556 | 10  | 0.9188 | -0.0048 | 0.9152 |
| No log | 0.6667 | 12  | 0.8902 | -0.0315 | 0.8872 |
| No log | 0.7778 | 14  | 0.8950 | 0.0741  | 0.8923 |
| No log | 0.8889 | 16  | 0.8770 | 0.0875  | 0.8744 |
| No log | 1.0    | 18  | 0.8807 | 0.1030  | 0.8782 |
| No log | 1.1111 | 20  | 0.8915 | -0.0099 | 0.8887 |
| No log | 1.2222 | 22  | 0.9457 | 0.0424  | 0.9431 |
| No log | 1.3333 | 24  | 1.0318 | 0.0182  | 1.0292 |
| No log | 1.4444 | 26  | 1.1586 | 0.0     | 1.1562 |
| No log | 1.5556 | 28  | 1.2936 | 0.0     | 1.2909 |
| No log | 1.6667 | 30  | 1.4219 | 0.0     | 1.4193 |
| No log | 1.7778 | 32  | 1.4842 | 0.0     | 1.4816 |
| No log | 1.8889 | 34  | 1.4765 | 0.0     | 1.4737 |
| No log | 2.0    | 36  | 1.4797 | 0.0     | 1.4769 |
| No log | 2.1111 | 38  | 1.4454 | 0.0     | 1.4425 |
| No log | 2.2222 | 40  | 1.5693 | 0.0     | 1.5665 |
| No log | 2.3333 | 42  | 1.6763 | 0.0     | 1.6737 |
| No log | 2.4444 | 44  | 1.6705 | 0.0     | 1.6677 |
| No log | 2.5556 | 46  | 1.4309 | 0.0182  | 1.4278 |
| No log | 2.6667 | 48  | 1.3201 | 0.0536  | 1.3169 |
| No log | 2.7778 | 50  | 1.5069 | 0.0182  | 1.5037 |
| No log | 2.8889 | 52  | 1.6390 | 0.0     | 1.6360 |
| No log | 3.0    | 54  | 1.5712 | 0.0     | 1.5682 |
| No log | 3.1111 | 56  | 1.4444 | 0.0182  | 1.4413 |
| No log | 3.2222 | 58  | 1.3827 | 0.0182  | 1.3796 |
| No log | 3.3333 | 60  | 1.5276 | 0.0     | 1.5245 |
| No log | 3.4444 | 62  | 1.5680 | 0.0025  | 1.5650 |
| No log | 3.5556 | 64  | 1.5963 | 0.0328  | 1.5931 |
| No log | 3.6667 | 66  | 1.5027 | -0.0118 | 1.4994 |
| No log | 3.7778 | 68  | 1.5906 | 0.0485  | 1.5873 |
| No log | 3.8889 | 70  | 1.7316 | 0.0225  | 1.7284 |
| No log | 4.0    | 72  | 1.6752 | 0.0160  | 1.6719 |
| No log | 4.1111 | 74  | 1.4644 | 0.0136  | 1.4611 |
| No log | 4.2222 | 76  | 1.2946 | 0.0498  | 1.2914 |
| No log | 4.3333 | 78  | 1.3892 | 0.0571  | 1.3860 |
| No log | 4.4444 | 80  | 1.6935 | 0.0183  | 1.6903 |
| No log | 4.5556 | 82  | 1.8327 | 0.0026  | 1.8295 |
| No log | 4.6667 | 84  | 1.7518 | 0.0272  | 1.7486 |
| No log | 4.7778 | 86  | 1.5355 | 0.0437  | 1.5322 |
| No log | 4.8889 | 88  | 1.4546 | 0.0750  | 1.4513 |
| No log | 5.0    | 90  | 1.6165 | 0.0875  | 1.6132 |
| No log | 5.1111 | 92  | 1.8925 | 0.0054  | 1.8891 |
| No log | 5.2222 | 94  | 1.9400 | -0.0784 | 1.9366 |
| No log | 5.3333 | 96  | 1.7667 | 0.0162  | 1.7634 |
| No log | 5.4444 | 98  | 1.5714 | 0.0700  | 1.5681 |
| No log | 5.5556 | 100 | 1.6393 | 0.0586  | 1.6360 |
| No log | 5.6667 | 102 | 1.7094 | 0.0144  | 1.7060 |
| No log | 5.7778 | 104 | 1.8384 | 0.0162  | 1.8349 |
| No log | 5.8889 | 106 | 1.7972 | 0.0130  | 1.7937 |
| No log | 6.0    | 108 | 1.7520 | 0.0113  | 1.7486 |
| No log | 6.1111 | 110 | 1.6669 | 0.0264  | 1.6636 |
| No log | 6.2222 | 112 | 1.6880 | 0.0301  | 1.6846 |
| No log | 6.3333 | 114 | 1.7562 | 0.0096  | 1.7528 |
| No log | 6.4444 | 116 | 1.7980 | 0.0228  | 1.7947 |
| No log | 6.5556 | 118 | 1.8220 | -0.0199 | 1.8187 |
| No log | 6.6667 | 120 | 1.7461 | 0.0746  | 1.7429 |
| No log | 6.7778 | 122 | 1.5563 | 0.0272  | 1.5532 |
| No log | 6.8889 | 124 | 1.4185 | 0.0655  | 1.4154 |
| No log | 7.0    | 126 | 1.4690 | 0.0697  | 1.4658 |
| No log | 7.1111 | 128 | 1.6412 | 0.0722  | 1.6381 |
| No log | 7.2222 | 130 | 1.8101 | -0.0325 | 1.8069 |
| No log | 7.3333 | 132 | 1.8161 | -0.0588 | 1.8129 |
| No log | 7.4444 | 134 | 1.6991 | 0.0758  | 1.6959 |
| No log | 7.5556 | 136 | 1.5693 | 0.0476  | 1.5661 |
| No log | 7.6667 | 138 | 1.5215 | 0.0437  | 1.5183 |
| No log | 7.7778 | 140 | 1.5990 | 0.0699  | 1.5957 |
| No log | 7.8889 | 142 | 1.7382 | 0.0363  | 1.7348 |
| No log | 8.0    | 144 | 1.8943 | -0.0587 | 1.8909 |
| No log | 8.1111 | 146 | 1.9468 | -0.0675 | 1.9434 |
| No log | 8.2222 | 148 | 1.9074 | -0.0587 | 1.9040 |
| No log | 8.3333 | 150 | 1.8074 | -0.0208 | 1.8040 |
| No log | 8.4444 | 152 | 1.7220 | 0.0241  | 1.7186 |
| No log | 8.5556 | 154 | 1.6877 | 0.0352  | 1.6843 |
| No log | 8.6667 | 156 | 1.7113 | 0.0222  | 1.7079 |
| No log | 8.7778 | 158 | 1.7927 | -0.0208 | 1.7893 |
| No log | 8.8889 | 160 | 1.8848 | -0.0477 | 1.8813 |
| No log | 9.0    | 162 | 1.9430 | -0.0613 | 1.9396 |
| No log | 9.1111 | 164 | 1.9388 | -0.0613 | 1.9354 |
| No log | 9.2222 | 166 | 1.8894 | -0.0477 | 1.8860 |
| No log | 9.3333 | 168 | 1.8314 | -0.0424 | 1.8279 |
| No log | 9.4444 | 170 | 1.7625 | -0.0357 | 1.7591 |
| No log | 9.5556 | 172 | 1.7335 | 0.0264  | 1.7302 |
| No log | 9.6667 | 174 | 1.7407 | 0.0264  | 1.7373 |
| No log | 9.7778 | 176 | 1.7576 | -0.0102 | 1.7543 |
| No log | 9.8889 | 178 | 1.7644 | -0.0357 | 1.7610 |
| No log | 10.0   | 180 | 1.7693 | -0.0357 | 1.7659 |

### Framework versions

- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1

Model size: 135M parameters (F32 safetensors)
