arabert_cross_vocabulary_task3_fold0

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set (a brief usage sketch follows the metrics):

  • Loss: 0.6808
  • Qwk (quadratic weighted kappa): 0.5680
  • Mse (mean squared error): 0.6804
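
A minimal usage sketch is given below. It assumes the checkpoint is published under the id salbatarni/arabert_cross_vocabulary_task3_fold0 (as listed on this page) and that the model exposes a single-logit regression head, which the MSE/QWK metrics suggest but this card does not confirm:

```python
# Minimal inference sketch. Assumption: the model was fine-tuned with a
# single-logit regression head (suggested, not confirmed, by the MSE/QWK
# metrics above).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "salbatarni/arabert_cross_vocabulary_task3_fold0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "نص عربي للتقييم"  # an Arabic text to score
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"Predicted score: {score:.4f}")
```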

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training; a matching Trainer configuration sketch follows the list:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 1
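
For reference, the settings above map onto the Hugging Face TrainingArguments below. This is a sketch only; the actual training script (dataset loading, metric computation, model head setup) is not included in this card:

```python
# Sketch of TrainingArguments matching the hyperparameters listed above.
# output_dir is a placeholder; dataset and metric code are omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="arabert_cross_vocabulary_task3_fold0",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    # Adam betas and epsilon as listed above (these are also the defaults).
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```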

Training results

| Training Loss | Epoch  | Step | Validation Loss | Qwk     | Mse    |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|
| No log        | 0.0323 | 2    | 4.4744          | -0.0256 | 4.4706 |
| No log        | 0.0645 | 4    | 2.3045          | 0.0249  | 2.3004 |
| No log        | 0.0968 | 6    | 1.2805          | 0.1391  | 1.2779 |
| No log        | 0.1290 | 8    | 1.0125          | 0.1335  | 1.0109 |
| No log        | 0.1613 | 10   | 1.0701          | 0.1914  | 1.0686 |
| No log        | 0.1935 | 12   | 1.2785          | 0.2037  | 1.2768 |
| No log        | 0.2258 | 14   | 1.2274          | 0.2479  | 1.2257 |
| No log        | 0.2581 | 16   | 1.0548          | 0.3135  | 1.0534 |
| No log        | 0.2903 | 18   | 0.9068          | 0.4200  | 0.9058 |
| No log        | 0.3226 | 20   | 0.8042          | 0.4719  | 0.8035 |
| No log        | 0.3548 | 22   | 0.7409          | 0.5318  | 0.7404 |
| No log        | 0.3871 | 24   | 0.7832          | 0.5356  | 0.7827 |
| No log        | 0.4194 | 26   | 0.8927          | 0.5031  | 0.8924 |
| No log        | 0.4516 | 28   | 1.1506          | 0.4441  | 1.1505 |
| No log        | 0.4839 | 30   | 1.4202          | 0.3909  | 1.4201 |
| No log        | 0.5161 | 32   | 1.1610          | 0.4441  | 1.1608 |
| No log        | 0.5484 | 34   | 0.8093          | 0.5444  | 0.8088 |
| No log        | 0.5806 | 36   | 0.6806          | 0.5981  | 0.6803 |
| No log        | 0.6129 | 38   | 0.6480          | 0.5752  | 0.6477 |
| No log        | 0.6452 | 40   | 0.6549          | 0.5691  | 0.6546 |
| No log        | 0.6774 | 42   | 0.7025          | 0.5317  | 0.7021 |
| No log        | 0.7097 | 44   | 0.7321          | 0.5105  | 0.7317 |
| No log        | 0.7419 | 46   | 0.7334          | 0.5066  | 0.7330 |
| No log        | 0.7742 | 48   | 0.7273          | 0.5221  | 0.7269 |
| No log        | 0.8065 | 50   | 0.7075          | 0.5309  | 0.7071 |
| No log        | 0.8387 | 52   | 0.6855          | 0.5374  | 0.6852 |
| No log        | 0.8710 | 54   | 0.6736          | 0.5635  | 0.6733 |
| No log        | 0.9032 | 56   | 0.6784          | 0.5680  | 0.6781 |
| No log        | 0.9355 | 58   | 0.6858          | 0.5657  | 0.6855 |
| No log        | 0.9677 | 60   | 0.6828          | 0.5669  | 0.6825 |
| No log        | 1.0    | 62   | 0.6808          | 0.5680  | 0.6804 |
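
The Qwk column above is quadratic weighted kappa, an agreement measure for ordinal labels. A small sketch of how it is commonly computed using scikit-learn; this card does not state which implementation the training script actually used, and the labels below are made-up examples:

```python
# Quadratic weighted kappa (the "Qwk" column) via scikit-learn.
# The gold scores and predictions here are hypothetical examples.
from sklearn.metrics import cohen_kappa_score

y_true = [0, 1, 2, 3, 2, 1]  # hypothetical gold scores
y_pred = [0, 2, 2, 3, 1, 1]  # hypothetical rounded model predictions
qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
print(f"QWK: {qwk:.4f}")
```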

Framework versions

  • Transformers 4.44.0
  • Pytorch 2.4.0
  • Datasets 2.21.0
  • Tokenizers 0.19.1