
arabert_cross_relevance_task1_fold0

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set (a minimal usage sketch follows the metrics):

  • Loss: 0.2694
  • Qwk (quadratic weighted kappa): 0.0109
  • Mse (mean squared error): 0.2694
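
The sketch below shows one way to load the checkpoint for inference. Assumptions: the repo id is the one shown on this page, and the model carries a single-output regression head, which is inferred from the Mse/Qwk metrics rather than stated in the card.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Repo id taken from this card's page; the single-output regression head is
# an assumption inferred from the Qwk/Mse metrics, not stated in the card.
model_id = "salbatarni/arabert_cross_relevance_task1_fold0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "نص عربي للتجربة"  # hypothetical Arabic input
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```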

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training; a sketch of an equivalent Trainer setup follows the list:

  • learning_rate: 2e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
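
For orientation, here is a minimal Trainer configuration mirroring these values. The dataset placeholders and the regression head (num_labels=1) are assumptions, not part of this card:

```python
from transformers import (
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

# num_labels=1 (regression head) is an assumption based on the Mse/Qwk metrics.
model = AutoModelForSequenceClassification.from_pretrained(
    "aubmindlab/bert-base-arabertv02", num_labels=1
)

# Values mirror the list above; adam_beta1/beta2/epsilon are already the
# TrainingArguments defaults, so they are not repeated here.
args = TrainingArguments(
    output_dir="arabert_cross_relevance_task1_fold0",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)

train_ds = eval_ds = None  # placeholders: supply tokenized Dataset objects
trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()  # uncomment once real datasets are provided
```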

Training results

| Training Loss | Epoch  | Step | Validation Loss | Qwk    | Mse    |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| No log        | 0.1333 | 2    | 2.2712          | 0.0    | 2.2712 |
| No log        | 0.2667 | 4    | 0.8815          | 0.0019 | 0.8815 |
| No log        | 0.4    | 6    | 0.2576          | 0.0339 | 0.2576 |
| No log        | 0.5333 | 8    | 0.2104          | 0.0276 | 0.2104 |
| No log        | 0.6667 | 10   | 0.3374          | 0.0345 | 0.3374 |
| No log        | 0.8    | 12   | 0.2690          | 0.0323 | 0.2690 |
| No log        | 0.9333 | 14   | 0.1682          | 0.0249 | 0.1682 |
| No log        | 1.0667 | 16   | 0.1503          | 0.0339 | 0.1503 |
| No log        | 1.2    | 18   | 0.1672          | 0.0339 | 0.1672 |
| No log        | 1.3333 | 20   | 0.2563          | 0.0263 | 0.2563 |
| No log        | 1.4667 | 22   | 0.3596          | 0.0519 | 0.3596 |
| No log        | 1.6    | 24   | 0.3144          | 0.0507 | 0.3144 |
| No log        | 1.7333 | 26   | 0.1940          | 0.0263 | 0.1940 |
| No log        | 1.8667 | 28   | 0.1460          | 0.0400 | 0.1460 |
| No log        | 2.0    | 30   | 0.1369          | 0.0364 | 0.1369 |
| No log        | 2.1333 | 32   | 0.1324          | 0.0372 | 0.1324 |
| No log        | 2.2667 | 34   | 0.1415          | 0.0339 | 0.1415 |
| No log        | 2.4    | 36   | 0.1809          | 0.0339 | 0.1809 |
| No log        | 2.5333 | 38   | 0.2294          | 0.0339 | 0.2294 |
| No log        | 2.6667 | 40   | 0.2654          | 0.0281 | 0.2654 |
| No log        | 2.8    | 42   | 0.2940          | 0.0245 | 0.2940 |
| No log        | 2.9333 | 44   | 0.2676          | 0.0245 | 0.2676 |
| No log        | 3.0667 | 46   | 0.2359          | 0.0245 | 0.2359 |
| No log        | 3.2    | 48   | 0.2040          | 0.0263 | 0.2040 |
| No log        | 3.3333 | 50   | 0.1696          | 0.0300 | 0.1696 |
| No log        | 3.4667 | 52   | 0.1616          | 0.0339 | 0.1616 |
| No log        | 3.6    | 54   | 0.1702          | 0.0319 | 0.1702 |
| No log        | 3.7333 | 56   | 0.2010          | 0.0281 | 0.2010 |
| No log        | 3.8667 | 58   | 0.2570          | 0.0228 | 0.2570 |
| No log        | 4.0    | 60   | 0.3107          | 0.0327 | 0.3107 |
| No log        | 4.1333 | 62   | 0.3132          | 0.0327 | 0.3132 |
| No log        | 4.2667 | 64   | 0.2646          | 0.0288 | 0.2646 |
| No log        | 4.4    | 66   | 0.2086          | 0.0263 | 0.2086 |
| No log        | 4.5333 | 68   | 0.1708          | 0.0300 | 0.1708 |
| No log        | 4.6667 | 70   | 0.1646          | 0.0179 | 0.1646 |
| No log        | 4.8    | 72   | 0.1786          | 0.0144 | 0.1786 |
| No log        | 4.9333 | 74   | 0.2198          | 0.0281 | 0.2198 |
| No log        | 5.0667 | 76   | 0.2585          | 0.0245 | 0.2585 |
| No log        | 5.2    | 78   | 0.2513          | 0.0263 | 0.2513 |
| No log        | 5.3333 | 80   | 0.2441          | 0.0263 | 0.2441 |
| No log        | 5.4667 | 82   | 0.2186          | 0.0197 | 0.2186 |
| No log        | 5.6    | 84   | 0.2061          | 0.0197 | 0.2061 |
| No log        | 5.7333 | 86   | 0.2178          | 0.0197 | 0.2178 |
| No log        | 5.8667 | 88   | 0.2322          | 0.0197 | 0.2322 |
| No log        | 6.0    | 90   | 0.2425          | 0.0245 | 0.2425 |
| No log        | 6.1333 | 92   | 0.2585          | 0.0245 | 0.2585 |
| No log        | 6.2667 | 94   | 0.2369          | 0.0195 | 0.2369 |
| No log        | 6.4    | 96   | 0.2110          | 0.0158 | 0.2110 |
| No log        | 6.5333 | 98   | 0.2006          | 0.0158 | 0.2006 |
| No log        | 6.6667 | 100  | 0.2175          | 0.0158 | 0.2175 |
| No log        | 6.8    | 102  | 0.2464          | 0.0210 | 0.2464 |
| No log        | 6.9333 | 104  | 0.2615          | 0.0225 | 0.2615 |
| No log        | 7.0667 | 106  | 0.2784          | 0.0288 | 0.2784 |
| No log        | 7.2    | 108  | 0.2991          | 0.0327 | 0.2991 |
| No log        | 7.3333 | 110  | 0.2942          | 0.0327 | 0.2942 |
| No log        | 7.4667 | 112  | 0.3083          | 0.0399 | 0.3083 |
| No log        | 7.6    | 114  | 0.2910          | 0.0221 | 0.2910 |
| No log        | 7.7333 | 116  | 0.2561          | 0.0093 | 0.2561 |
| No log        | 7.8667 | 118  | 0.2246          | 0.0125 | 0.2246 |
| No log        | 8.0    | 120  | 0.2152          | 0.0156 | 0.2152 |
| No log        | 8.1333 | 122  | 0.2131          | 0.0156 | 0.2131 |
| No log        | 8.2667 | 124  | 0.2238          | 0.0140 | 0.2238 |
| No log        | 8.4    | 126  | 0.2341          | 0.0123 | 0.2341 |
| No log        | 8.5333 | 128  | 0.2459          | 0.0123 | 0.2459 |
| No log        | 8.6667 | 130  | 0.2510          | 0.0123 | 0.2510 |
| No log        | 8.8    | 132  | 0.2702          | 0.0109 | 0.2702 |
| No log        | 8.9333 | 134  | 0.2776          | 0.0093 | 0.2776 |
| No log        | 9.0667 | 136  | 0.2862          | 0.0173 | 0.2862 |
| No log        | 9.2    | 138  | 0.2916          | 0.0158 | 0.2916 |
| No log        | 9.3333 | 140  | 0.2853          | 0.0158 | 0.2853 |
| No log        | 9.4667 | 142  | 0.2781          | 0.0173 | 0.2781 |
| No log        | 9.6    | 144  | 0.2737          | 0.0093 | 0.2737 |
| No log        | 9.7333 | 146  | 0.2709          | 0.0109 | 0.2709 |
| No log        | 9.8667 | 148  | 0.2693          | 0.0109 | 0.2693 |
| No log        | 10.0   | 150  | 0.2694          | 0.0109 | 0.2694 |
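
For reference, here is a small sketch of how the Qwk and Mse columns are typically computed with scikit-learn. The labels and predictions are hypothetical, and rounding continuous outputs to integer labels before the kappa computation is a common convention that the card does not confirm:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

# Hypothetical gold labels and model outputs, for illustration only.
refs = np.array([0, 1, 2, 1])
preds = np.array([0.2, 1.1, 1.7, 0.9])

mse = mean_squared_error(refs, preds)
# Round continuous outputs to the nearest label before computing kappa;
# the card does not specify how its Qwk values were derived.
qwk = cohen_kappa_score(refs, np.rint(preds).astype(int), weights="quadratic")
print(f"Mse: {mse:.4f}  Qwk: {qwk:.4f}")
```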

Framework versions

  • Transformers 4.44.0
  • Pytorch 2.4.0
  • Datasets 2.21.0
  • Tokenizers 0.19.1