
# arabert_cross_vocabulary_task5_fold0

This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset. It achieves the following results on the evaluation set (see the metric sketch after the list):

- Loss: 0.7184
- Qwk: 0.5361
- Mse: 0.7182
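Qwk here denotes quadratically weighted Cohen's kappa and Mse the mean squared error. The card does not document the evaluation code, so the following is only a minimal sketch of how these metrics are conventionally computed with scikit-learn; the sample scores and the rounding of continuous predictions to integer labels are assumptions:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

# Hypothetical gold scores and raw model outputs.
y_true = np.array([1, 2, 3, 2, 4])
y_pred = np.array([1.2, 2.4, 2.8, 2.1, 3.7])

# Mse on the raw predictions; Qwk on predictions rounded to integer labels.
mse = mean_squared_error(y_true, y_pred)
qwk = cohen_kappa_score(y_true, np.rint(y_pred).astype(int), weights="quadratic")
print(f"Mse: {mse:.4f}  Qwk: {qwk:.4f}")
```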

## Model description

More information needed

## Intended uses & limitations

More information needed
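In the absence of documented usage, here is a minimal loading sketch, assuming the checkpoint carries a single-output regression head (consistent with the Mse/Qwk metrics reported above); the input sentence is a placeholder:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "salbatarni/arabert_cross_vocabulary_task5_fold0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Placeholder Arabic input; replace with real task data.
inputs = tokenizer("نص عربي للتقييم", return_tensors="pt", truncation=True)
with torch.no_grad():
    # Assumes num_labels == 1 (a regression head); adapt if classification.
    score = model(**inputs).logits.squeeze().item()
print(score)
```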

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):

- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
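For convenience, the listed values map onto the Hugging Face `TrainingArguments` as sketched below; the output directory is an assumption, and the model/dataset setup is omitted:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; adam_beta1/beta2/epsilon
# are already the TrainingArguments defaults (0.9, 0.999, 1e-08).
training_args = TrainingArguments(
    output_dir="arabert_cross_vocabulary_task5_fold0",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```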

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Qwk     | Mse    |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|
| No log        | 0.0351 | 2    | 6.4993          | -0.0053 | 6.4959 |
| No log        | 0.0702 | 4    | 4.3274          | 0.0546  | 4.3230 |
| No log        | 0.1053 | 6    | 2.0332          | 0.1163  | 2.0309 |
| No log        | 0.1404 | 8    | 1.3322          | 0.1284  | 1.3306 |
| No log        | 0.1754 | 10   | 0.9326          | 0.2426  | 0.9313 |
| No log        | 0.2105 | 12   | 0.9842          | 0.2847  | 0.9820 |
| No log        | 0.2456 | 14   | 1.2941          | 0.2170  | 1.2916 |
| No log        | 0.2807 | 16   | 1.0180          | 0.3371  | 1.0161 |
| No log        | 0.3158 | 18   | 0.7295          | 0.4967  | 0.7284 |
| No log        | 0.3509 | 20   | 0.6802          | 0.5540  | 0.6795 |
| No log        | 0.3860 | 22   | 0.7502          | 0.5261  | 0.7494 |
| No log        | 0.4211 | 24   | 0.7871          | 0.5160  | 0.7862 |
| No log        | 0.4561 | 26   | 0.7535          | 0.5296  | 0.7529 |
| No log        | 0.4912 | 28   | 0.7088          | 0.5404  | 0.7085 |
| No log        | 0.5263 | 30   | 0.7579          | 0.4308  | 0.7577 |
| No log        | 0.5614 | 32   | 0.7816          | 0.4014  | 0.7814 |
| No log        | 0.5965 | 34   | 0.7506          | 0.4588  | 0.7504 |
| No log        | 0.6316 | 36   | 0.7024          | 0.5120  | 0.7022 |
| No log        | 0.6667 | 38   | 0.6632          | 0.5751  | 0.6631 |
| No log        | 0.7018 | 40   | 0.6681          | 0.5840  | 0.6681 |
| No log        | 0.7368 | 42   | 0.6746          | 0.5890  | 0.6745 |
| No log        | 0.7719 | 44   | 0.6782          | 0.5883  | 0.6781 |
| No log        | 0.8070 | 46   | 0.6889          | 0.5642  | 0.6887 |
| No log        | 0.8421 | 48   | 0.7117          | 0.5524  | 0.7115 |
| No log        | 0.8772 | 50   | 0.7291          | 0.5388  | 0.7288 |
| No log        | 0.9123 | 52   | 0.7256          | 0.5334  | 0.7253 |
| No log        | 0.9474 | 54   | 0.7224          | 0.5334  | 0.7222 |
| No log        | 0.9825 | 56   | 0.7184          | 0.5361  | 0.7182 |

### Framework versions

- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1