arabert_cross_organization_task7_fold3

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02; the training dataset is not specified (the auto-generated card lists it as "None"). It achieves the following results on the evaluation set (a loading sketch follows the list):

  • Loss: 1.2477
  • Qwk: 0.0397
  • Mse: 1.2477
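
The card does not say how the model is meant to be used. Given the regression-style metrics (Mse alongside Qwk, i.e. quadratic weighted kappa), a single-output scoring head is a reasonable guess; the snippet below is a minimal loading sketch under that assumption, using the repo id from this card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Repo id as published on the Hub; the single-output regression head is an
# assumption inferred from the reported Qwk/Mse metrics, not stated in the card.
model_id = "salbatarni/arabert_cross_organization_task7_fold3"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "نص تجريبي"  # hypothetical placeholder input
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```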

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (mirrored in the sketch after the list):

  • learning_rate: 2e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
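
The Adam betas and epsilon above are the transformers defaults, so the run can be approximated with a stock Trainer setup. A hedged sketch follows; the dataset objects are placeholders because the card does not name the training data, and num_labels=1 (regression) is an assumption.

```python
from transformers import (
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

# Base checkpoint named in this card; num_labels=1 is an assumption.
model = AutoModelForSequenceClassification.from_pretrained(
    "aubmindlab/bert-base-arabertv02", num_labels=1
)

args = TrainingArguments(
    output_dir="arabert_cross_organization_task7_fold3",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",  # Adam betas/epsilon above are the defaults
    num_train_epochs=10,
    eval_strategy="steps",
    eval_steps=2,                # the results table shows evaluation every 2 steps
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,  # placeholder: dataset unspecified in the card
    eval_dataset=eval_dataset,    # placeholder: dataset unspecified in the card
)
trainer.train()
```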

Training results

In the table below, "No log" means the training loss was never recorded: the whole run is only 170 optimizer steps, likely fewer than the Trainer's logging interval. A sketch of how the Qwk and Mse columns can be computed follows the table.

| Training Loss | Epoch  | Step | Validation Loss | Qwk     | Mse    |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|
| No log        | 0.1176 | 2    | 4.6630          | -0.0214 | 4.6630 |
| No log        | 0.2353 | 4    | 1.9131          | -0.0296 | 1.9131 |
| No log        | 0.3529 | 6    | 1.4779          | 0.0333  | 1.4779 |
| No log        | 0.4706 | 8    | 1.3898          | -0.0031 | 1.3898 |
| No log        | 0.5882 | 10   | 1.1632          | -0.0182 | 1.1632 |
| No log        | 0.7059 | 12   | 1.2046          | 0.0046  | 1.2046 |
| No log        | 0.8235 | 14   | 1.1480          | -0.0723 | 1.1480 |
| No log        | 0.9412 | 16   | 1.1436          | -0.0110 | 1.1436 |
| No log        | 1.0588 | 18   | 1.1619          | -0.0110 | 1.1619 |
| No log        | 1.1765 | 20   | 1.1482          | -0.0110 | 1.1482 |
| No log        | 1.2941 | 22   | 1.1202          | -0.0110 | 1.1202 |
| No log        | 1.4118 | 24   | 1.1170          | 0.0062  | 1.1170 |
| No log        | 1.5294 | 26   | 1.1174          | 0.0413  | 1.1174 |
| No log        | 1.6471 | 28   | 1.1486          | -0.0248 | 1.1486 |
| No log        | 1.7647 | 30   | 1.2697          | 0.0     | 1.2697 |
| No log        | 1.8824 | 32   | 1.2498          | -0.0147 | 1.2498 |
| No log        | 2.0    | 34   | 1.1692          | -0.0248 | 1.1692 |
| No log        | 2.1176 | 36   | 1.1913          | -0.0203 | 1.1913 |
| No log        | 2.2353 | 38   | 1.2360          | -0.0255 | 1.2360 |
| No log        | 2.3529 | 40   | 1.1562          | -0.0046 | 1.1562 |
| No log        | 2.4706 | 42   | 1.1715          | -0.0074 | 1.1715 |
| No log        | 2.5882 | 44   | 1.2562          | -0.0147 | 1.2562 |
| No log        | 2.7059 | 46   | 1.1883          | -0.0039 | 1.1883 |
| No log        | 2.8235 | 48   | 1.1458          | 0.0518  | 1.1458 |
| No log        | 2.9412 | 50   | 1.2475          | -0.0289 | 1.2475 |
| No log        | 3.0588 | 52   | 1.3234          | 0.0059  | 1.3234 |
| No log        | 3.1765 | 54   | 1.1838          | -0.0002 | 1.1838 |
| No log        | 3.2941 | 56   | 1.1405          | 0.0753  | 1.1405 |
| No log        | 3.4118 | 58   | 1.1817          | 0.0686  | 1.1817 |
| No log        | 3.5294 | 60   | 1.1663          | 0.0753  | 1.1663 |
| No log        | 3.6471 | 62   | 1.1574          | 0.0944  | 1.1574 |
| No log        | 3.7647 | 64   | 1.1895          | 0.0474  | 1.1895 |
| No log        | 3.8824 | 66   | 1.1771          | 0.0160  | 1.1771 |
| No log        | 4.0    | 68   | 1.1811          | 0.0725  | 1.1811 |
| No log        | 4.1176 | 70   | 1.2361          | 0.0469  | 1.2361 |
| No log        | 4.2353 | 72   | 1.2386          | 0.0650  | 1.2386 |
| No log        | 4.3529 | 74   | 1.1740          | 0.1183  | 1.1740 |
| No log        | 4.4706 | 76   | 1.1578          | 0.1142  | 1.1578 |
| No log        | 4.5882 | 78   | 1.1478          | 0.0927  | 1.1478 |
| No log        | 4.7059 | 80   | 1.1556          | 0.0890  | 1.1556 |
| No log        | 4.8235 | 82   | 1.1743          | 0.0682  | 1.1743 |
| No log        | 4.9412 | 84   | 1.1621          | 0.0682  | 1.1621 |
| No log        | 5.0588 | 86   | 1.1329          | 0.0963  | 1.1329 |
| No log        | 5.1765 | 88   | 1.1322          | 0.1113  | 1.1322 |
| No log        | 5.2941 | 90   | 1.1440          | 0.1234  | 1.1440 |
| No log        | 5.4118 | 92   | 1.2118          | 0.0376  | 1.2118 |
| No log        | 5.5294 | 94   | 1.2862          | 0.0713  | 1.2862 |
| No log        | 5.6471 | 96   | 1.3217          | 0.0599  | 1.3217 |
| No log        | 5.7647 | 98   | 1.2121          | 0.0624  | 1.2121 |
| No log        | 5.8824 | 100  | 1.1592          | 0.1501  | 1.1592 |
| No log        | 6.0    | 102  | 1.1912          | 0.0385  | 1.1912 |
| No log        | 6.1176 | 104  | 1.1673          | 0.0828  | 1.1673 |
| No log        | 6.2353 | 106  | 1.1644          | 0.0448  | 1.1644 |
| No log        | 6.3529 | 108  | 1.2438          | 0.0629  | 1.2438 |
| No log        | 6.4706 | 110  | 1.3083          | 0.0622  | 1.3083 |
| No log        | 6.5882 | 112  | 1.2970          | 0.0470  | 1.2970 |
| No log        | 6.7059 | 114  | 1.2160          | 0.0295  | 1.2160 |
| No log        | 6.8235 | 116  | 1.1682          | 0.0881  | 1.1682 |
| No log        | 6.9412 | 118  | 1.1659          | 0.0898  | 1.1659 |
| No log        | 7.0588 | 120  | 1.1694          | 0.1029  | 1.1694 |
| No log        | 7.1765 | 122  | 1.1813          | 0.0829  | 1.1813 |
| No log        | 7.2941 | 124  | 1.1840          | 0.0847  | 1.1840 |
| No log        | 7.4118 | 126  | 1.1941          | 0.0847  | 1.1941 |
| No log        | 7.5294 | 128  | 1.2296          | 0.0915  | 1.2296 |
| No log        | 7.6471 | 130  | 1.2731          | 0.0561  | 1.2731 |
| No log        | 7.7647 | 132  | 1.2863          | 0.0680  | 1.2863 |
| No log        | 7.8824 | 134  | 1.2644          | 0.0391  | 1.2644 |
| No log        | 8.0    | 136  | 1.2471          | 0.0771  | 1.2471 |
| No log        | 8.1176 | 138  | 1.2423          | 0.0789  | 1.2423 |
| No log        | 8.2353 | 140  | 1.2401          | 0.0596  | 1.2401 |
| No log        | 8.3529 | 142  | 1.2475          | 0.0596  | 1.2475 |
| No log        | 8.4706 | 144  | 1.2539          | 0.0703  | 1.2539 |
| No log        | 8.5882 | 146  | 1.2593          | 0.0794  | 1.2593 |
| No log        | 8.7059 | 148  | 1.2770          | 0.0551  | 1.2770 |
| No log        | 8.8235 | 150  | 1.2937          | 0.0858  | 1.2937 |
| No log        | 8.9412 | 152  | 1.2970          | 0.0840  | 1.2970 |
| No log        | 9.0588 | 154  | 1.3005          | 0.0657  | 1.3005 |
| No log        | 9.1765 | 156  | 1.2860          | 0.0657  | 1.2860 |
| No log        | 9.2941 | 158  | 1.2799          | 0.0355  | 1.2799 |
| No log        | 9.4118 | 160  | 1.2697          | 0.0407  | 1.2697 |
| No log        | 9.5294 | 162  | 1.2616          | 0.0653  | 1.2616 |
| No log        | 9.6471 | 164  | 1.2564          | 0.0374  | 1.2564 |
| No log        | 9.7647 | 166  | 1.2527          | 0.0397  | 1.2527 |
| No log        | 9.8824 | 168  | 1.2493          | 0.0397  | 1.2493 |
| No log        | 10.0   | 170  | 1.2477          | 0.0397  | 1.2477 |
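
Qwk and Mse here are, presumably, quadratic weighted kappa and mean squared error. Below is a hedged sketch of how they can be computed with scikit-learn; rounding the continuous predictions to integer ratings before the kappa is an assumption, since the card does not show the evaluation code.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def compute_metrics(preds: np.ndarray, labels: np.ndarray) -> dict:
    """Hedged re-implementation of the card's Qwk/Mse metrics."""
    qwk = cohen_kappa_score(
        np.rint(labels).astype(int),  # kappa needs discrete ratings,
        np.rint(preds).astype(int),   # so continuous outputs are rounded
        weights="quadratic",
    )
    mse = mean_squared_error(labels, preds)
    return {"qwk": qwk, "mse": mse}

# Example: perfect agreement gives qwk = 1.0 and mse = 0.0
print(compute_metrics(np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.0, 3.0])))
```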

Framework versions

  • Transformers 4.44.0
  • PyTorch 2.4.0
  • Datasets 2.21.0
  • Tokenizers 0.19.1
