arabert_cross_vocabulary_task1_fold0

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set (Qwk is the quadratic weighted kappa and Mse the mean squared error; a sketch of how these metrics can be computed follows the list):

  • Loss: 0.9563
  • Qwk: 0.3128
  • Mse: 0.9563
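
The card does not include the metric code, so the following is a minimal sketch of how Qwk and Mse could be computed with scikit-learn; rounding continuous predictions to discrete scores before the kappa is an assumption, not a documented detail of this model's evaluation.

```python
# Hedged sketch: computing Qwk and Mse with scikit-learn.
# The actual evaluation code for this model is not published in this card;
# rounding predictions to discrete labels before kappa is an assumption.
from sklearn.metrics import cohen_kappa_score, mean_squared_error


def compute_metrics(y_true, y_pred):
    qwk = cohen_kappa_score(
        y_true,
        [round(p) for p in y_pred],  # kappa requires discrete labels
        weights="quadratic",
    )
    mse = mean_squared_error(y_true, y_pred)
    return {"qwk": qwk, "mse": mse}


print(compute_metrics([1, 2, 3, 4], [1.2, 2.1, 2.8, 3.5]))
```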

Model description

More information needed
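
The author has not provided a description, but the checkpoint can be loaded with the standard transformers API. Below is a minimal sketch, assuming a single-logit regression head; that assumption is suggested, but not confirmed, by the identical Loss and Mse values above.

```python
# Hedged sketch: loading the checkpoint for inference.
# Assumptions (not documented in the card): the model is a regression-style
# sequence classifier with a single output logit. Note that AraBERT models
# are usually fed text normalized with the `arabert` preprocessor; that
# step is omitted here for brevity.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "salbatarni/arabert_cross_vocabulary_task1_fold0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

inputs = tokenizer("نص تجريبي للتقييم", return_tensors="pt")  # placeholder text
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```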

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the TrainingArguments sketch after this list):

  • learning_rate: 2e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
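
The original training script is not published. The following is a sketch of TrainingArguments mirroring the listed values; eval_steps=2 is inferred from the results table below (validation metrics appear every 2 steps), and output_dir is a placeholder.

```python
# Hedged sketch: TrainingArguments matching the hyperparameters above.
# eval_steps=2 is inferred from the results table; output_dir is a
# placeholder, not the author's actual path.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="arabert_cross_vocabulary_task1_fold0",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    eval_strategy="steps",  # `evaluation_strategy` on older Transformers
    eval_steps=2,
)
```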

Training results

The training loss column shows "No log" throughout, presumably because the run's 150 total optimization steps end before the Trainer's default logging interval (500 steps) is first reached.

| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse |
|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 0.1333 | 2 | 4.7632 | 0.0048 | 4.7632 |
| No log | 0.2667 | 4 | 2.1783 | 0.0291 | 2.1783 |
| No log | 0.4 | 6 | 1.1081 | 0.1198 | 1.1081 |
| No log | 0.5333 | 8 | 1.2015 | 0.1712 | 1.2015 |
| No log | 0.6667 | 10 | 1.9526 | 0.1246 | 1.9526 |
| No log | 0.8 | 12 | 1.0676 | 0.2290 | 1.0676 |
| No log | 0.9333 | 14 | 0.5840 | 0.4151 | 0.5840 |
| No log | 1.0667 | 16 | 0.5753 | 0.4082 | 0.5753 |
| No log | 1.2 | 18 | 0.8160 | 0.2285 | 0.8160 |
| No log | 1.3333 | 20 | 1.0681 | 0.2071 | 1.0681 |
| No log | 1.4667 | 22 | 0.9546 | 0.2910 | 0.9546 |
| No log | 1.6 | 24 | 0.8621 | 0.3677 | 0.8621 |
| No log | 1.7333 | 26 | 0.9054 | 0.3700 | 0.9054 |
| No log | 1.8667 | 28 | 0.9001 | 0.3161 | 0.9001 |
| No log | 2.0 | 30 | 0.7765 | 0.3323 | 0.7765 |
| No log | 2.1333 | 32 | 0.8465 | 0.2482 | 0.8465 |
| No log | 2.2667 | 34 | 0.7701 | 0.2998 | 0.7701 |
| No log | 2.4 | 36 | 0.6622 | 0.3947 | 0.6622 |
| No log | 2.5333 | 38 | 0.8242 | 0.3282 | 0.8242 |
| No log | 2.6667 | 40 | 1.1963 | 0.2592 | 1.1963 |
| No log | 2.8 | 42 | 1.0937 | 0.2903 | 1.0937 |
| No log | 2.9333 | 44 | 0.7877 | 0.3934 | 0.7877 |
| No log | 3.0667 | 46 | 0.5972 | 0.4479 | 0.5972 |
| No log | 3.2 | 48 | 0.6227 | 0.4269 | 0.6227 |
| No log | 3.3333 | 50 | 0.8396 | 0.3376 | 0.8396 |
| No log | 3.4667 | 52 | 1.0463 | 0.2697 | 1.0463 |
| No log | 3.6 | 54 | 0.8738 | 0.3001 | 0.8738 |
| No log | 3.7333 | 56 | 0.6019 | 0.4289 | 0.6019 |
| No log | 3.8667 | 58 | 0.5198 | 0.4722 | 0.5198 |
| No log | 4.0 | 60 | 0.5541 | 0.4554 | 0.5541 |
| No log | 4.1333 | 62 | 0.7597 | 0.3733 | 0.7597 |
| No log | 4.2667 | 64 | 0.9356 | 0.3432 | 0.9356 |
| No log | 4.4 | 66 | 0.8464 | 0.3610 | 0.8464 |
| No log | 4.5333 | 68 | 0.7096 | 0.3687 | 0.7096 |
| No log | 4.6667 | 70 | 0.7102 | 0.3574 | 0.7102 |
| No log | 4.8 | 72 | 0.7078 | 0.3669 | 0.7078 |
| No log | 4.9333 | 74 | 0.8143 | 0.3480 | 0.8143 |
| No log | 5.0667 | 76 | 0.9307 | 0.3211 | 0.9307 |
| No log | 5.2 | 78 | 0.9263 | 0.3242 | 0.9263 |
| No log | 5.3333 | 80 | 0.7661 | 0.3610 | 0.7661 |
| No log | 5.4667 | 82 | 0.6978 | 0.3853 | 0.6978 |
| No log | 5.6 | 84 | 0.8151 | 0.3739 | 0.8151 |
| No log | 5.7333 | 86 | 0.8609 | 0.3869 | 0.8609 |
| No log | 5.8667 | 88 | 0.7966 | 0.3804 | 0.7966 |
| No log | 6.0 | 90 | 0.7527 | 0.3801 | 0.7527 |
| No log | 6.1333 | 92 | 0.7607 | 0.3861 | 0.7607 |
| No log | 6.2667 | 94 | 0.8652 | 0.3306 | 0.8652 |
| No log | 6.4 | 96 | 0.9460 | 0.3135 | 0.9460 |
| No log | 6.5333 | 98 | 1.0831 | 0.2779 | 1.0831 |
| No log | 6.6667 | 100 | 1.0697 | 0.2892 | 1.0697 |
| No log | 6.8 | 102 | 0.9442 | 0.3343 | 0.9442 |
| No log | 6.9333 | 104 | 1.0589 | 0.2994 | 1.0589 |
| No log | 7.0667 | 106 | 1.1776 | 0.2674 | 1.1776 |
| No log | 7.2 | 108 | 1.1644 | 0.2696 | 1.1644 |
| No log | 7.3333 | 110 | 0.9516 | 0.3314 | 0.9516 |
| No log | 7.4667 | 112 | 0.8591 | 0.3636 | 0.8591 |
| No log | 7.6 | 114 | 0.9364 | 0.3335 | 0.9364 |
| No log | 7.7333 | 116 | 0.9971 | 0.3111 | 0.9971 |
| No log | 7.8667 | 118 | 0.9728 | 0.3155 | 0.9728 |
| No log | 8.0 | 120 | 0.8721 | 0.3499 | 0.8721 |
| No log | 8.1333 | 122 | 0.8160 | 0.3628 | 0.8160 |
| No log | 8.2667 | 124 | 0.8194 | 0.3688 | 0.8194 |
| No log | 8.4 | 126 | 0.8317 | 0.3748 | 0.8317 |
| No log | 8.5333 | 128 | 0.8909 | 0.3437 | 0.8909 |
| No log | 8.6667 | 130 | 0.9882 | 0.3225 | 0.9882 |
| No log | 8.8 | 132 | 1.0839 | 0.2850 | 1.0839 |
| No log | 8.9333 | 134 | 1.1055 | 0.2859 | 1.1055 |
| No log | 9.0667 | 136 | 1.1389 | 0.2859 | 1.1389 |
| No log | 9.2 | 138 | 1.1417 | 0.2795 | 1.1417 |
| No log | 9.3333 | 140 | 1.0801 | 0.2923 | 1.0801 |
| No log | 9.4667 | 142 | 1.0176 | 0.3061 | 1.0176 |
| No log | 9.6 | 144 | 0.9751 | 0.3091 | 0.9751 |
| No log | 9.7333 | 146 | 0.9550 | 0.3128 | 0.9550 |
| No log | 9.8667 | 148 | 0.9569 | 0.3128 | 0.9569 |
| No log | 10.0 | 150 | 0.9563 | 0.3128 | 0.9563 |

Framework versions

  • Transformers 4.44.0
  • Pytorch 2.4.0
  • Datasets 2.21.0
  • Tokenizers 0.19.1