
# arabert_cross_organization_task4_fold4

This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.5141
- Qwk: 0.7810
- Mse: 0.5141
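
Since the card reports regression-style metrics (MSE alongside QWK), the model presumably outputs a single score per input. Below is a minimal inference sketch, assuming a single-logit sequence-classification head (`num_labels=1`); the head configuration is an assumption, not confirmed by this card:

```python
# Minimal inference sketch. The repo id comes from this card; the
# single-logit regression head is an assumption based on the MSE/QWK
# metrics reported above.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "salbatarni/arabert_cross_organization_task4_fold4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("نص عربي للتقييم", return_tensors="pt")  # example Arabic input
with torch.no_grad():
    outputs = model(**inputs)

# A single scalar score, if the head is indeed a one-logit regressor.
score = outputs.logits.squeeze().item()
print(score)
```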

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
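
These settings can be reproduced with `TrainingArguments`; a minimal sketch, where the output path is a placeholder and the Adam betas/epsilon and linear schedule match the `Trainer` defaults, so they need no extra flags:

```python
# Sketch of TrainingArguments mirroring the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="arabert_cross_organization_task4_fold4",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```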

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Qwk    | Mse    |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| No log        | 0.1111 | 2    | 1.8597          | 0.1122 | 1.8597 |
| No log        | 0.2222 | 4    | 1.3020          | 0.0409 | 1.3020 |
| No log        | 0.3333 | 6    | 1.0459          | 0.3503 | 1.0459 |
| No log        | 0.4444 | 8    | 0.7138          | 0.4906 | 0.7138 |
| No log        | 0.5556 | 10   | 0.5900          | 0.6224 | 0.5900 |
| No log        | 0.6667 | 12   | 0.5708          | 0.6214 | 0.5708 |
| No log        | 0.7778 | 14   | 0.4881          | 0.6661 | 0.4881 |
| No log        | 0.8889 | 16   | 0.4843          | 0.6880 | 0.4843 |
| No log        | 1.0    | 18   | 0.5772          | 0.7185 | 0.5772 |
| No log        | 1.1111 | 20   | 0.4344          | 0.7515 | 0.4344 |
| No log        | 1.2222 | 22   | 0.4173          | 0.6853 | 0.4173 |
| No log        | 1.3333 | 24   | 0.4326          | 0.7470 | 0.4326 |
| No log        | 1.4444 | 26   | 0.6327          | 0.7329 | 0.6327 |
| No log        | 1.5556 | 28   | 0.6486          | 0.7640 | 0.6486 |
| No log        | 1.6667 | 30   | 0.4510          | 0.7633 | 0.4510 |
| No log        | 1.7778 | 32   | 0.3885          | 0.7569 | 0.3885 |
| No log        | 1.8889 | 34   | 0.4049          | 0.7722 | 0.4049 |
| No log        | 2.0    | 36   | 0.5466          | 0.7900 | 0.5466 |
| No log        | 2.1111 | 38   | 0.5445          | 0.7886 | 0.5445 |
| No log        | 2.2222 | 40   | 0.4445          | 0.7553 | 0.4445 |
| No log        | 2.3333 | 42   | 0.4182          | 0.7437 | 0.4182 |
| No log        | 2.4444 | 44   | 0.4202          | 0.7536 | 0.4202 |
| No log        | 2.5556 | 46   | 0.5364          | 0.7929 | 0.5364 |
| No log        | 2.6667 | 48   | 0.6070          | 0.7880 | 0.6070 |
| No log        | 2.7778 | 50   | 0.4960          | 0.7859 | 0.4960 |
| No log        | 2.8889 | 52   | 0.4044          | 0.7719 | 0.4044 |
| No log        | 3.0    | 54   | 0.3938          | 0.7606 | 0.3938 |
| No log        | 3.1111 | 56   | 0.4669          | 0.7947 | 0.4669 |
| No log        | 3.2222 | 58   | 0.5343          | 0.7820 | 0.5343 |
| No log        | 3.3333 | 60   | 0.4763          | 0.7853 | 0.4763 |
| No log        | 3.4444 | 62   | 0.4091          | 0.7835 | 0.4091 |
| No log        | 3.5556 | 64   | 0.4119          | 0.7882 | 0.4119 |
| No log        | 3.6667 | 66   | 0.4525          | 0.7813 | 0.4525 |
| No log        | 3.7778 | 68   | 0.4761          | 0.7828 | 0.4761 |
| No log        | 3.8889 | 70   | 0.4893          | 0.7931 | 0.4893 |
| No log        | 4.0    | 72   | 0.4435          | 0.7862 | 0.4435 |
| No log        | 4.1111 | 74   | 0.4754          | 0.7918 | 0.4754 |
| No log        | 4.2222 | 76   | 0.5004          | 0.7931 | 0.5004 |
| No log        | 4.3333 | 78   | 0.5554          | 0.8090 | 0.5554 |
| No log        | 4.4444 | 80   | 0.5319          | 0.7947 | 0.5319 |
| No log        | 4.5556 | 82   | 0.4459          | 0.7781 | 0.4459 |
| No log        | 4.6667 | 84   | 0.4355          | 0.7725 | 0.4355 |
| No log        | 4.7778 | 86   | 0.4699          | 0.7823 | 0.4699 |
| No log        | 4.8889 | 88   | 0.4860          | 0.7900 | 0.4860 |
| No log        | 5.0    | 90   | 0.4400          | 0.7892 | 0.4400 |
| No log        | 5.1111 | 92   | 0.4221          | 0.7376 | 0.4221 |
| No log        | 5.2222 | 94   | 0.4264          | 0.7879 | 0.4264 |
| No log        | 5.3333 | 96   | 0.4728          | 0.8106 | 0.4728 |
| No log        | 5.4444 | 98   | 0.5254          | 0.8043 | 0.5254 |
| No log        | 5.5556 | 100  | 0.4876          | 0.8048 | 0.4876 |
| No log        | 5.6667 | 102  | 0.4400          | 0.7767 | 0.4400 |
| No log        | 5.7778 | 104  | 0.4191          | 0.7487 | 0.4191 |
| No log        | 5.8889 | 106  | 0.4281          | 0.7643 | 0.4281 |
| No log        | 6.0    | 108  | 0.4819          | 0.7888 | 0.4819 |
| No log        | 6.1111 | 110  | 0.5487          | 0.8063 | 0.5487 |
| No log        | 6.2222 | 112  | 0.6060          | 0.7903 | 0.6060 |
| No log        | 6.3333 | 114  | 0.5618          | 0.7847 | 0.5618 |
| No log        | 6.4444 | 116  | 0.5080          | 0.7689 | 0.5080 |
| No log        | 6.5556 | 118  | 0.4883          | 0.7543 | 0.4883 |
| No log        | 6.6667 | 120  | 0.4979          | 0.7597 | 0.4979 |
| No log        | 6.7778 | 122  | 0.5155          | 0.7757 | 0.5155 |
| No log        | 6.8889 | 124  | 0.5239          | 0.7883 | 0.5239 |
| No log        | 7.0    | 126  | 0.5025          | 0.7973 | 0.5025 |
| No log        | 7.1111 | 128  | 0.4784          | 0.7894 | 0.4784 |
| No log        | 7.2222 | 130  | 0.4608          | 0.7714 | 0.4608 |
| No log        | 7.3333 | 132  | 0.4592          | 0.7608 | 0.4592 |
| No log        | 7.4444 | 134  | 0.4736          | 0.7898 | 0.4736 |
| No log        | 7.5556 | 136  | 0.5099          | 0.7905 | 0.5099 |
| No log        | 7.6667 | 138  | 0.5575          | 0.8010 | 0.5575 |
| No log        | 7.7778 | 140  | 0.5556          | 0.8167 | 0.5556 |
| No log        | 7.8889 | 142  | 0.5181          | 0.7957 | 0.5181 |
| No log        | 8.0    | 144  | 0.4691          | 0.7885 | 0.4691 |
| No log        | 8.1111 | 146  | 0.4424          | 0.7890 | 0.4424 |
| No log        | 8.2222 | 148  | 0.4411          | 0.7803 | 0.4411 |
| No log        | 8.3333 | 150  | 0.4646          | 0.7859 | 0.4646 |
| No log        | 8.4444 | 152  | 0.4939          | 0.8070 | 0.4939 |
| No log        | 8.5556 | 154  | 0.5186          | 0.8172 | 0.5186 |
| No log        | 8.6667 | 156  | 0.5307          | 0.8195 | 0.5307 |
| No log        | 8.7778 | 158  | 0.5184          | 0.8146 | 0.5184 |
| No log        | 8.8889 | 160  | 0.4898          | 0.8099 | 0.4898 |
| No log        | 9.0    | 162  | 0.4741          | 0.7957 | 0.4741 |
| No log        | 9.1111 | 164  | 0.4724          | 0.7948 | 0.4724 |
| No log        | 9.2222 | 166  | 0.4828          | 0.7937 | 0.4828 |
| No log        | 9.3333 | 168  | 0.4861          | 0.7937 | 0.4861 |
| No log        | 9.4444 | 170  | 0.4940          | 0.7848 | 0.4940 |
| No log        | 9.5556 | 172  | 0.5042          | 0.7810 | 0.5042 |
| No log        | 9.6667 | 174  | 0.5129          | 0.7810 | 0.5129 |
| No log        | 9.7778 | 176  | 0.5142          | 0.7810 | 0.5142 |
| No log        | 9.8889 | 178  | 0.5135          | 0.7810 | 0.5135 |
| No log        | 10.0   | 180  | 0.5141          | 0.7810 | 0.5141 |
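
For reference, `Qwk` is quadratic weighted kappa and `Mse` is mean squared error. A minimal sketch of how these two columns could be computed, assuming integer-valued gold scores and rounded model predictions (the exact rounding scheme used here is not documented):

```python
# Hypothetical example values; the real evaluation data is not public.
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = [3, 2, 4, 3]          # hypothetical gold scores
y_pred = [2.8, 2.1, 3.9, 3.4]  # hypothetical model outputs

mse = mean_squared_error(y_true, y_pred)
# QWK requires discrete labels, so predictions are rounded first.
qwk = cohen_kappa_score(y_true, [round(p) for p in y_pred], weights="quadratic")
print(f"MSE: {mse:.4f}, QWK: {qwk:.4f}")
```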

### Framework versions

- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
