
arabert_cross_organization_task4_fold3

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5515
  • Qwk (quadratic weighted kappa): 0.8285
  • Mse (mean squared error): 0.5515
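
The card does not state the task, but the Qwk and Mse metrics suggest a single-score regression head, as produced by `AutoModelForSequenceClassification` with one label. A minimal inference sketch under that assumption:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: the checkpoint carries a single-output regression head;
# the card itself does not state the task or the label scale.
model_id = "salbatarni/arabert_cross_organization_task4_fold3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "نص عربي للتقييم"  # hypothetical Arabic input
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"predicted score: {score:.3f}")
```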

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
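
For reference, these values map onto Transformers' `TrainingArguments` roughly as in the sketch below; the output directory is assumed, the per-2-step evaluation cadence is inferred from the results table, and the dataset, collator, and metric functions are not documented in this card.

```python
from transformers import TrainingArguments

# A sketch of the reported hyperparameters only; everything marked as
# assumed is not stated in the card.
args = TrainingArguments(
    output_dir="arabert_cross_organization_task4_fold3",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    eval_strategy="steps",  # cadence inferred from the results table
    eval_steps=2,
)
```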

Training results

The training loss column reads "No log" because the run's 170 steps never reach Transformers' default logging interval, so no training loss was recorded.

| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| No log | 0.1176 | 2 | 1.8640 | 0.0713 | 1.8640 |
| No log | 0.2353 | 4 | 1.2390 | 0.2961 | 1.2390 |
| No log | 0.3529 | 6 | 1.0659 | 0.5023 | 1.0659 |
| No log | 0.4706 | 8 | 0.7195 | 0.6388 | 0.7195 |
| No log | 0.5882 | 10 | 0.6736 | 0.6636 | 0.6736 |
| No log | 0.7059 | 12 | 0.6560 | 0.7539 | 0.6560 |
| No log | 0.8235 | 14 | 0.6088 | 0.7149 | 0.6088 |
| No log | 0.9412 | 16 | 0.5695 | 0.7426 | 0.5695 |
| No log | 1.0588 | 18 | 0.5646 | 0.7706 | 0.5646 |
| No log | 1.1765 | 20 | 0.5616 | 0.7796 | 0.5616 |
| No log | 1.2941 | 22 | 0.5516 | 0.7811 | 0.5516 |
| No log | 1.4118 | 24 | 0.5475 | 0.7868 | 0.5475 |
| No log | 1.5294 | 26 | 0.5246 | 0.7436 | 0.5246 |
| No log | 1.6471 | 28 | 0.5463 | 0.6820 | 0.5463 |
| No log | 1.7647 | 30 | 0.5511 | 0.7714 | 0.5511 |
| No log | 1.8824 | 32 | 0.6634 | 0.7890 | 0.6634 |
| No log | 2.0 | 34 | 0.5815 | 0.7811 | 0.5815 |
| No log | 2.1176 | 36 | 0.5186 | 0.7320 | 0.5186 |
| No log | 2.2353 | 38 | 0.5161 | 0.7348 | 0.5161 |
| No log | 2.3529 | 40 | 0.5145 | 0.7637 | 0.5145 |
| No log | 2.4706 | 42 | 0.5367 | 0.7805 | 0.5367 |
| No log | 2.5882 | 44 | 0.5238 | 0.7807 | 0.5238 |
| No log | 2.7059 | 46 | 0.5169 | 0.7771 | 0.5169 |
| No log | 2.8235 | 48 | 0.5287 | 0.7852 | 0.5287 |
| No log | 2.9412 | 50 | 0.5446 | 0.7867 | 0.5446 |
| No log | 3.0588 | 52 | 0.5892 | 0.7786 | 0.5892 |
| No log | 3.1765 | 54 | 0.5913 | 0.7818 | 0.5913 |
| No log | 3.2941 | 56 | 0.5352 | 0.7813 | 0.5352 |
| No log | 3.4118 | 58 | 0.5197 | 0.7785 | 0.5197 |
| No log | 3.5294 | 60 | 0.5895 | 0.7951 | 0.5895 |
| No log | 3.6471 | 62 | 0.6125 | 0.7950 | 0.6125 |
| No log | 3.7647 | 64 | 0.5768 | 0.7875 | 0.5768 |
| No log | 3.8824 | 66 | 0.5110 | 0.7717 | 0.5110 |
| No log | 4.0 | 68 | 0.5375 | 0.7910 | 0.5375 |
| No log | 4.1176 | 70 | 0.6011 | 0.7996 | 0.6011 |
| No log | 4.2353 | 72 | 0.5525 | 0.7824 | 0.5525 |
| No log | 4.3529 | 74 | 0.5620 | 0.7907 | 0.5620 |
| No log | 4.4706 | 76 | 0.5220 | 0.7908 | 0.5220 |
| No log | 4.5882 | 78 | 0.5028 | 0.7641 | 0.5028 |
| No log | 4.7059 | 80 | 0.5102 | 0.7841 | 0.5102 |
| No log | 4.8235 | 82 | 0.6101 | 0.8166 | 0.6101 |
| No log | 4.9412 | 84 | 0.7637 | 0.8377 | 0.7637 |
| No log | 5.0588 | 86 | 0.6895 | 0.8271 | 0.6895 |
| No log | 5.1765 | 88 | 0.5154 | 0.7893 | 0.5154 |
| No log | 5.2941 | 90 | 0.4808 | 0.7932 | 0.4808 |
| No log | 5.4118 | 92 | 0.5241 | 0.7847 | 0.5241 |
| No log | 5.5294 | 94 | 0.6303 | 0.8282 | 0.6303 |
| No log | 5.6471 | 96 | 0.6026 | 0.8213 | 0.6026 |
| No log | 5.7647 | 98 | 0.5078 | 0.8217 | 0.5078 |
| No log | 5.8824 | 100 | 0.4940 | 0.8070 | 0.4940 |
| No log | 6.0 | 102 | 0.5633 | 0.8182 | 0.5633 |
| No log | 6.1176 | 104 | 0.7172 | 0.8375 | 0.7172 |
| No log | 6.2353 | 106 | 0.7195 | 0.8305 | 0.7195 |
| No log | 6.3529 | 108 | 0.5965 | 0.8196 | 0.5965 |
| No log | 6.4706 | 110 | 0.5114 | 0.7928 | 0.5114 |
| No log | 6.5882 | 112 | 0.5064 | 0.7886 | 0.5064 |
| No log | 6.7059 | 114 | 0.5439 | 0.8123 | 0.5439 |
| No log | 6.8235 | 116 | 0.6118 | 0.8257 | 0.6118 |
| No log | 6.9412 | 118 | 0.6143 | 0.8179 | 0.6143 |
| No log | 7.0588 | 120 | 0.6140 | 0.8174 | 0.6140 |
| No log | 7.1765 | 122 | 0.5766 | 0.8207 | 0.5766 |
| No log | 7.2941 | 124 | 0.5369 | 0.8188 | 0.5369 |
| No log | 7.4118 | 126 | 0.5480 | 0.8132 | 0.5480 |
| No log | 7.5294 | 128 | 0.6078 | 0.8236 | 0.6078 |
| No log | 7.6471 | 130 | 0.6701 | 0.8235 | 0.6701 |
| No log | 7.7647 | 132 | 0.6414 | 0.8221 | 0.6414 |
| No log | 7.8824 | 134 | 0.5864 | 0.8195 | 0.5864 |
| No log | 8.0 | 136 | 0.5473 | 0.8186 | 0.5473 |
| No log | 8.1176 | 138 | 0.5345 | 0.8188 | 0.5345 |
| No log | 8.2353 | 140 | 0.5432 | 0.8202 | 0.5432 |
| No log | 8.3529 | 142 | 0.5420 | 0.8239 | 0.5420 |
| No log | 8.4706 | 144 | 0.5381 | 0.8188 | 0.5381 |
| No log | 8.5882 | 146 | 0.5139 | 0.8016 | 0.5139 |
| No log | 8.7059 | 148 | 0.5052 | 0.7981 | 0.5052 |
| No log | 8.8235 | 150 | 0.5095 | 0.7948 | 0.5095 |
| No log | 8.9412 | 152 | 0.5334 | 0.8173 | 0.5334 |
| No log | 9.0588 | 154 | 0.5783 | 0.8340 | 0.5783 |
| No log | 9.1765 | 156 | 0.6036 | 0.8366 | 0.6036 |
| No log | 9.2941 | 158 | 0.6056 | 0.8403 | 0.6056 |
| No log | 9.4118 | 160 | 0.5911 | 0.8320 | 0.5911 |
| No log | 9.5294 | 162 | 0.5683 | 0.8301 | 0.5683 |
| No log | 9.6471 | 164 | 0.5583 | 0.8287 | 0.5583 |
| No log | 9.7647 | 166 | 0.5530 | 0.8269 | 0.5530 |
| No log | 9.8824 | 168 | 0.5512 | 0.8285 | 0.5512 |
| No log | 10.0 | 170 | 0.5515 | 0.8285 | 0.5515 |
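
Qwk and Mse in this table are conventionally quadratic weighted kappa and mean squared error. A sketch of how such metrics are typically computed with scikit-learn; the exact implementation and label scale used for this run are not given, and the scores below are illustrative only:

```python
from sklearn.metrics import cohen_kappa_score, mean_squared_error

# Hypothetical gold scores and model outputs for illustration only.
y_true = [3, 2, 4, 1, 3]
y_pred = [2.8, 2.1, 3.9, 1.3, 3.2]

mse = mean_squared_error(y_true, y_pred)
# Kappa needs discrete ratings, so continuous predictions are rounded first.
qwk = cohen_kappa_score(y_true, [round(p) for p in y_pred], weights="quadratic")
print(f"Mse: {mse:.4f}  Qwk: {qwk:.4f}")
```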

Framework versions

  • Transformers 4.44.0
  • Pytorch 2.4.0
  • Datasets 2.21.0
  • Tokenizers 0.19.1