
arabert_cross_organization_task2_fold4

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set (a sketch of how the two metrics are computed follows the list):

  • Loss: 0.4343
  • Qwk (quadratic weighted kappa): 0.7442
  • Mse (mean squared error): 0.4343
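
Loss and Mse coincide, which suggests the model was trained with a mean-squared-error objective, i.e. as a regression over ordinal scores. Below is a minimal sketch of how Qwk and Mse are conventionally computed; scikit-learn is an assumption here, not part of the stack listed under Framework versions.

```python
# Minimal sketch: conventional Qwk and Mse computation for ordinal scores.
# scikit-learn is an assumption; it is not listed under Framework versions.
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = [0, 1, 2, 2, 3]  # illustrative gold scores
y_pred = [0, 1, 1, 2, 3]  # illustrative (rounded) model predictions

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
mse = mean_squared_error(y_true, y_pred)
print(f"Qwk: {qwk:.4f}  Mse: {mse:.4f}")
```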

Model description

More information needed

Intended uses & limitations

More information needed
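
Pending fuller documentation, the sketch below shows one plausible way to load this checkpoint for inference. Treating it as a sequence-classification head is an assumption inferred from the Qwk/Mse metrics above; the card does not state the task or label scheme.

```python
# Hedged inference sketch; the head type and output interpretation are
# assumptions, since the card does not document the task format.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "salbatarni/arabert_cross_organization_task2_fold4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "..."  # an Arabic input for the (undocumented) scoring task
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # interpret according to the undocumented label/score scheme
```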

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an equivalent TrainingArguments sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
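
A sketch of equivalent TrainingArguments under Transformers 4.44; output_dir and all wiring not listed above (dataset, Trainer setup) are illustrative assumptions. The Adam settings shown are also the library defaults.

```python
# Sketch of TrainingArguments mirroring the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="arabert_cross_organization_task2_fold4",  # assumption
    learning_rate=2e-05,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```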

Training results

| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| No log | 0.125 | 2 | 3.0559 | 0.0132 | 3.0559 |
| No log | 0.25 | 4 | 1.5018 | 0.1018 | 1.5018 |
| No log | 0.375 | 6 | 0.9403 | 0.3496 | 0.9403 |
| No log | 0.5 | 8 | 1.0512 | 0.4227 | 1.0512 |
| No log | 0.625 | 10 | 0.7038 | 0.5121 | 0.7038 |
| No log | 0.75 | 12 | 0.7727 | 0.4566 | 0.7727 |
| No log | 0.875 | 14 | 0.5989 | 0.5213 | 0.5989 |
| No log | 1.0 | 16 | 0.6428 | 0.6397 | 0.6428 |
| No log | 1.125 | 18 | 0.6331 | 0.7095 | 0.6331 |
| No log | 1.25 | 20 | 0.5024 | 0.6478 | 0.5024 |
| No log | 1.375 | 22 | 0.7624 | 0.4770 | 0.7624 |
| No log | 1.5 | 24 | 0.5793 | 0.5485 | 0.5793 |
| No log | 1.625 | 26 | 0.4887 | 0.6546 | 0.4887 |
| No log | 1.75 | 28 | 0.5482 | 0.6854 | 0.5482 |
| No log | 1.875 | 30 | 0.5328 | 0.7446 | 0.5328 |
| No log | 2.0 | 32 | 0.4476 | 0.6785 | 0.4476 |
| No log | 2.125 | 34 | 0.5184 | 0.5745 | 0.5184 |
| No log | 2.25 | 36 | 0.4772 | 0.6201 | 0.4772 |
| No log | 2.375 | 38 | 0.4229 | 0.7095 | 0.4229 |
| No log | 2.5 | 40 | 0.4747 | 0.7500 | 0.4747 |
| No log | 2.625 | 42 | 0.4556 | 0.7201 | 0.4556 |
| No log | 2.75 | 44 | 0.4703 | 0.6407 | 0.4703 |
| No log | 2.875 | 46 | 0.4875 | 0.6566 | 0.4875 |
| No log | 3.0 | 48 | 0.5070 | 0.7290 | 0.5070 |
| No log | 3.125 | 50 | 0.4950 | 0.7764 | 0.4950 |
| No log | 3.25 | 52 | 0.4192 | 0.7444 | 0.4192 |
| No log | 3.375 | 54 | 0.4132 | 0.6919 | 0.4132 |
| No log | 3.5 | 56 | 0.4024 | 0.7128 | 0.4024 |
| No log | 3.625 | 58 | 0.4094 | 0.7451 | 0.4094 |
| No log | 3.75 | 60 | 0.4675 | 0.7828 | 0.4675 |
| No log | 3.875 | 62 | 0.4559 | 0.7636 | 0.4559 |
| No log | 4.0 | 64 | 0.4150 | 0.7449 | 0.4150 |
| No log | 4.125 | 66 | 0.3994 | 0.7551 | 0.3994 |
| No log | 4.25 | 68 | 0.3872 | 0.7513 | 0.3872 |
| No log | 4.375 | 70 | 0.3951 | 0.7719 | 0.3951 |
| No log | 4.5 | 72 | 0.4536 | 0.7801 | 0.4536 |
| No log | 4.625 | 74 | 0.4695 | 0.7891 | 0.4695 |
| No log | 4.75 | 76 | 0.4253 | 0.7787 | 0.4253 |
| No log | 4.875 | 78 | 0.3967 | 0.7809 | 0.3967 |
| No log | 5.0 | 80 | 0.3954 | 0.7506 | 0.3954 |
| No log | 5.125 | 82 | 0.4062 | 0.7844 | 0.4062 |
| No log | 5.25 | 84 | 0.4096 | 0.7688 | 0.4096 |
| No log | 5.375 | 86 | 0.4305 | 0.7167 | 0.4305 |
| No log | 5.5 | 88 | 0.4607 | 0.6647 | 0.4607 |
| No log | 5.625 | 90 | 0.4776 | 0.6876 | 0.4776 |
| No log | 5.75 | 92 | 0.4996 | 0.7150 | 0.4996 |
| No log | 5.875 | 94 | 0.5241 | 0.7677 | 0.5241 |
| No log | 6.0 | 96 | 0.5059 | 0.7933 | 0.5059 |
| No log | 6.125 | 98 | 0.4470 | 0.7830 | 0.4470 |
| No log | 6.25 | 100 | 0.4010 | 0.7665 | 0.4010 |
| No log | 6.375 | 102 | 0.4147 | 0.6921 | 0.4147 |
| No log | 6.5 | 104 | 0.4226 | 0.6845 | 0.4226 |
| No log | 6.625 | 106 | 0.4193 | 0.7197 | 0.4193 |
| No log | 6.75 | 108 | 0.4395 | 0.7571 | 0.4395 |
| No log | 6.875 | 110 | 0.4602 | 0.7536 | 0.4602 |
| No log | 7.0 | 112 | 0.4569 | 0.7332 | 0.4569 |
| No log | 7.125 | 114 | 0.4359 | 0.7109 | 0.4359 |
| No log | 7.25 | 116 | 0.4245 | 0.7097 | 0.4245 |
| No log | 7.375 | 118 | 0.4142 | 0.7397 | 0.4142 |
| No log | 7.5 | 120 | 0.4102 | 0.7558 | 0.4102 |
| No log | 7.625 | 122 | 0.4179 | 0.7845 | 0.4179 |
| No log | 7.75 | 124 | 0.4170 | 0.7876 | 0.4170 |
| No log | 7.875 | 126 | 0.4173 | 0.7876 | 0.4173 |
| No log | 8.0 | 128 | 0.4157 | 0.7629 | 0.4157 |
| No log | 8.125 | 130 | 0.4165 | 0.7617 | 0.4165 |
| No log | 8.25 | 132 | 0.4198 | 0.7551 | 0.4198 |
| No log | 8.375 | 134 | 0.4256 | 0.7560 | 0.4256 |
| No log | 8.5 | 136 | 0.4285 | 0.7405 | 0.4285 |
| No log | 8.625 | 138 | 0.4320 | 0.7413 | 0.4320 |
| No log | 8.75 | 140 | 0.4361 | 0.7522 | 0.4361 |
| No log | 8.875 | 142 | 0.4387 | 0.7512 | 0.4387 |
| No log | 9.0 | 144 | 0.4377 | 0.7502 | 0.4377 |
| No log | 9.125 | 146 | 0.4356 | 0.7447 | 0.4356 |
| No log | 9.25 | 148 | 0.4354 | 0.7421 | 0.4354 |
| No log | 9.375 | 150 | 0.4355 | 0.7421 | 0.4355 |
| No log | 9.5 | 152 | 0.4365 | 0.7421 | 0.4365 |
| No log | 9.625 | 154 | 0.4364 | 0.7421 | 0.4364 |
| No log | 9.75 | 156 | 0.4354 | 0.7509 | 0.4354 |
| No log | 9.875 | 158 | 0.4347 | 0.7442 | 0.4347 |
| No log | 10.0 | 160 | 0.4343 | 0.7442 | 0.4343 |

Framework versions

  • Transformers 4.44.0
  • Pytorch 2.4.0
  • Datasets 2.21.0
  • Tokenizers 0.19.1
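
To recreate this environment, pinning the four listed packages should be enough; any further dependencies are an assumption.

```
pip install transformers==4.44.0 torch==2.4.0 datasets==2.21.0 tokenizers==0.19.1
```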
