
arabert_cross_organization_task6_fold2

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set (a loading sketch follows the metrics):

  • Loss: 1.0752
  • Qwk: 0.1211
  • Mse: 1.0727
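
The card does not state the downstream task, but the Qwk and Mse metrics suggest a single-value scoring (regression-style) head. Below is a minimal inference sketch, assuming the checkpoint exposes a sequence-classification head with one output; the input text is a placeholder, not from the card:

```python
# Minimal inference sketch; the exact task and label scale are not
# documented on this card, so treat this as illustrative only.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "salbatarni/arabert_cross_organization_task6_fold2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "نص عربي للتقييم"  # placeholder Arabic input
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    # Assumes a single regression logit; squeeze to a scalar score.
    score = model(**inputs).logits.squeeze().item()
print(score)
```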

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (mirrored in the sketch after the list):

  • learning_rate: 2e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
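
A hedged reconstruction of this setup with the Hugging Face TrainingArguments API; the output_dir is an assumption, and everything the card leaves out (dataset, metric function, data collator) is omitted:

```python
# Sketch of the listed hyperparameters as TrainingArguments;
# only the values shown on the card are reproduced here.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="arabert_cross_organization_task6_fold2",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    # Adam settings listed on the card:
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```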

Training results

| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|
| No log | 0.1176 | 2 | 3.7816 | 0.0007 | 3.7767 |
| No log | 0.2353 | 4 | 1.3542 | 0.0008 | 1.3493 |
| No log | 0.3529 | 6 | 0.8381 | 0.0381 | 0.8355 |
| No log | 0.4706 | 8 | 0.8831 | -0.0815 | 0.8796 |
| No log | 0.5882 | 10 | 0.8969 | 0.0429 | 0.8931 |
| No log | 0.7059 | 12 | 0.8250 | 0.0520 | 0.8218 |
| No log | 0.8235 | 14 | 1.0019 | 0.0182 | 0.9999 |
| No log | 0.9412 | 16 | 1.1717 | -0.0025 | 1.1701 |
| No log | 1.0588 | 18 | 1.2136 | -0.0025 | 1.2120 |
| No log | 1.1765 | 20 | 1.1606 | 0.0 | 1.1587 |
| No log | 1.2941 | 22 | 1.2446 | 0.0 | 1.2426 |
| No log | 1.4118 | 24 | 1.3967 | 0.0 | 1.3948 |
| No log | 1.5294 | 26 | 1.4847 | 0.0253 | 1.4827 |
| No log | 1.6471 | 28 | 1.3244 | 0.0231 | 1.3221 |
| No log | 1.7647 | 30 | 1.0785 | 0.0360 | 1.0760 |
| No log | 1.8824 | 32 | 0.8498 | 0.1972 | 0.8470 |
| No log | 2.0 | 34 | 0.9134 | 0.1247 | 0.9107 |
| No log | 2.1176 | 36 | 1.1156 | 0.0182 | 1.1130 |
| No log | 2.2353 | 38 | 1.0973 | 0.0 | 1.0948 |
| No log | 2.3529 | 40 | 1.0822 | 0.0182 | 1.0794 |
| No log | 2.4706 | 42 | 1.0078 | 0.0789 | 1.0047 |
| No log | 2.5882 | 44 | 1.1175 | 0.0880 | 1.1144 |
| No log | 2.7059 | 46 | 1.3284 | 0.0465 | 1.3253 |
| No log | 2.8235 | 48 | 1.6008 | 0.0931 | 1.5976 |
| No log | 2.9412 | 50 | 2.1079 | 0.0276 | 2.1049 |
| No log | 3.0588 | 52 | 2.2734 | 0.0412 | 2.2705 |
| No log | 3.1765 | 54 | 1.8347 | 0.1369 | 1.8320 |
| No log | 3.2941 | 56 | 1.3122 | 0.0884 | 1.3096 |
| No log | 3.4118 | 58 | 1.2595 | 0.0664 | 1.2569 |
| No log | 3.5294 | 60 | 1.4282 | 0.1006 | 1.4256 |
| No log | 3.6471 | 62 | 1.5462 | 0.1413 | 1.5436 |
| No log | 3.7647 | 64 | 1.3829 | 0.0593 | 1.3803 |
| No log | 3.8824 | 66 | 0.9939 | 0.1230 | 0.9914 |
| No log | 4.0 | 68 | 0.8054 | 0.1788 | 0.8030 |
| No log | 4.1176 | 70 | 0.7868 | 0.1816 | 0.7845 |
| No log | 4.2353 | 72 | 0.9015 | 0.1607 | 0.8991 |
| No log | 4.3529 | 74 | 1.2405 | 0.0483 | 1.2379 |
| No log | 4.4706 | 76 | 1.4275 | 0.0040 | 1.4247 |
| No log | 4.5882 | 78 | 1.3885 | 0.0201 | 1.3856 |
| No log | 4.7059 | 80 | 1.2664 | 0.0431 | 1.2634 |
| No log | 4.8235 | 82 | 1.0916 | 0.0750 | 1.0885 |
| No log | 4.9412 | 84 | 1.0885 | 0.0159 | 1.0854 |
| No log | 5.0588 | 86 | 1.1736 | 0.0869 | 1.1705 |
| No log | 5.1765 | 88 | 1.3914 | 0.1105 | 1.3885 |
| No log | 5.2941 | 90 | 1.5037 | 0.0839 | 1.5010 |
| No log | 5.4118 | 92 | 1.4052 | 0.0391 | 1.4026 |
| No log | 5.5294 | 94 | 1.2441 | 0.0662 | 1.2417 |
| No log | 5.6471 | 96 | 1.1952 | 0.0646 | 1.1929 |
| No log | 5.7647 | 98 | 1.1442 | 0.0814 | 1.1419 |
| No log | 5.8824 | 100 | 1.1950 | 0.0855 | 1.1926 |
| No log | 6.0 | 102 | 1.3191 | 0.0880 | 1.3167 |
| No log | 6.1176 | 104 | 1.4094 | 0.0999 | 1.4070 |
| No log | 6.2353 | 106 | 1.2755 | 0.1507 | 1.2730 |
| No log | 6.3529 | 108 | 1.0797 | 0.1262 | 1.0770 |
| No log | 6.4706 | 110 | 0.9830 | 0.1381 | 0.9804 |
| No log | 6.5882 | 112 | 0.9647 | 0.1405 | 0.9620 |
| No log | 6.7059 | 114 | 1.0370 | 0.1146 | 1.0345 |
| No log | 6.8235 | 116 | 1.1227 | 0.0885 | 1.1203 |
| No log | 6.9412 | 118 | 1.2057 | 0.1026 | 1.2033 |
| No log | 7.0588 | 120 | 1.2261 | 0.0518 | 1.2238 |
| No log | 7.1765 | 122 | 1.2047 | 0.0652 | 1.2023 |
| No log | 7.2941 | 124 | 1.0652 | 0.1195 | 1.0628 |
| No log | 7.4118 | 126 | 0.9603 | 0.1804 | 0.9578 |
| No log | 7.5294 | 128 | 0.9421 | 0.1812 | 0.9395 |
| No log | 7.6471 | 130 | 1.0068 | 0.1111 | 1.0041 |
| No log | 7.7647 | 132 | 1.1676 | 0.1367 | 1.1650 |
| No log | 7.8824 | 134 | 1.3081 | 0.1382 | 1.3055 |
| No log | 8.0 | 136 | 1.3954 | 0.1123 | 1.3929 |
| No log | 8.1176 | 138 | 1.3854 | 0.1123 | 1.3829 |
| No log | 8.2353 | 140 | 1.2987 | 0.1203 | 1.2962 |
| No log | 8.3529 | 142 | 1.1429 | 0.1190 | 1.1405 |
| No log | 8.4706 | 144 | 1.0515 | 0.1232 | 1.0491 |
| No log | 8.5882 | 146 | 1.0416 | 0.1371 | 1.0392 |
| No log | 8.7059 | 148 | 1.0625 | 0.1438 | 1.0601 |
| No log | 8.8235 | 150 | 1.0904 | 0.1152 | 1.0880 |
| No log | 8.9412 | 152 | 1.1358 | 0.1190 | 1.1334 |
| No log | 9.0588 | 154 | 1.1786 | 0.1043 | 1.1762 |
| No log | 9.1765 | 156 | 1.1978 | 0.1043 | 1.1954 |
| No log | 9.2941 | 158 | 1.1835 | 0.1043 | 1.1810 |
| No log | 9.4118 | 160 | 1.1423 | 0.1190 | 1.1399 |
| No log | 9.5294 | 162 | 1.1135 | 0.1336 | 1.1110 |
| No log | 9.6471 | 164 | 1.1006 | 0.1132 | 1.0981 |
| No log | 9.7647 | 166 | 1.0926 | 0.1277 | 1.0902 |
| No log | 9.8824 | 168 | 1.0805 | 0.1211 | 1.0780 |
| No log | 10.0 | 170 | 1.0752 | 0.1211 | 1.0727 |
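
For reference, Qwk is quadratic weighted kappa and Mse is mean squared error. A minimal sketch of how such metrics can be computed with scikit-learn; the labels and predictions below are made up, and the card does not show the actual metric code used for this run:

```python
# Hypothetical example of computing QWK and MSE for rating-style outputs.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = np.array([3, 1, 2, 4, 0])            # hypothetical gold ratings
y_pred = np.array([2.8, 1.2, 2.1, 3.6, 0.4])  # hypothetical model outputs

mse = mean_squared_error(y_true, y_pred)
# QWK needs discrete ratings, so round the regression outputs first.
qwk = cohen_kappa_score(y_true, np.rint(y_pred).astype(int), weights="quadratic")
print(f"MSE: {mse:.4f}, QWK: {qwk:.4f}")
```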

Framework versions

  • Transformers 4.44.0
  • PyTorch 2.4.0
  • Datasets 2.21.0
  • Tokenizers 0.19.1