
arabert_cross_vocabulary_task6_fold3

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set (a minimal loading sketch follows the list):

  • Loss: 0.9261
  • Qwk (quadratic weighted kappa): 0.0
  • Mse (mean squared error): 0.9261
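
Below is a minimal loading and inference sketch. The repository id is taken from this card; treating the model as a single-output regression head (num_labels=1) is an assumption, suggested by the identical Loss and Mse values above but not stated in the card.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Repo id as listed on this card; the regression head is an assumption.
repo_id = "salbatarni/arabert_cross_vocabulary_task6_fold3"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id, num_labels=1)
model.eval()

text = "مثال على نص عربي"  # "an example of Arabic text"
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # single predicted score
print(score)
```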

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of an equivalent Trainer setup follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 1
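
A hedged sketch of a Trainer configuration matching the values above; dataset loading and preprocessing are omitted because the card does not specify the training data, and the placeholder names are illustrative only. The listed optimizer settings (Adam with betas=(0.9, 0.999), epsilon=1e-08) match the library defaults, so no explicit optimizer is configured.

```python
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

# Base checkpoint from the card; the single-output head is an assumption.
model = AutoModelForSequenceClassification.from_pretrained(
    "aubmindlab/bert-base-arabertv02", num_labels=1
)

training_args = TrainingArguments(
    output_dir="arabert_cross_vocabulary_task6_fold3",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)

# Placeholders: the card does not name or describe the dataset.
train_dataset = eval_dataset = None

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.train()
```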

Training results

| Training Loss | Epoch  | Step | Validation Loss | Qwk     | Mse    |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|
| No log        | 0.0290 | 2    | 6.5303          | 0.0019  | 6.5303 |
| No log        | 0.0580 | 4    | 3.1144          | 0.0013  | 3.1144 |
| No log        | 0.0870 | 6    | 1.8229          | 0.0085  | 1.8229 |
| No log        | 0.1159 | 8    | 1.2398          | 0.0     | 1.2398 |
| No log        | 0.1449 | 10   | 0.7589          | 0.0496  | 0.7589 |
| No log        | 0.1739 | 12   | 0.7859          | 0.0071  | 0.7859 |
| No log        | 0.2029 | 14   | 0.8223          | -0.0684 | 0.8223 |
| No log        | 0.2319 | 16   | 0.8121          | -0.0353 | 0.8121 |
| No log        | 0.2609 | 18   | 0.7513          | -0.0578 | 0.7513 |
| No log        | 0.2899 | 20   | 0.7698          | 0.0309  | 0.7698 |
| No log        | 0.3188 | 22   | 0.7570          | -0.0286 | 0.7570 |
| No log        | 0.3478 | 24   | 0.7494          | -0.0743 | 0.7494 |
| No log        | 0.3768 | 26   | 0.7478          | 0.0     | 0.7478 |
| No log        | 0.4058 | 28   | 0.7555          | 0.0     | 0.7555 |
| No log        | 0.4348 | 30   | 0.7954          | 0.0     | 0.7954 |
| No log        | 0.4638 | 32   | 0.8212          | 0.0     | 0.8212 |
| No log        | 0.4928 | 34   | 0.8769          | 0.0     | 0.8769 |
| No log        | 0.5217 | 36   | 0.9076          | 0.0     | 0.9076 |
| No log        | 0.5507 | 38   | 0.8669          | 0.0     | 0.8669 |
| No log        | 0.5797 | 40   | 0.7835          | -0.0530 | 0.7835 |
| No log        | 0.6087 | 42   | 0.8154          | -0.0118 | 0.8154 |
| No log        | 0.6377 | 44   | 1.0142          | 0.0093  | 1.0142 |
| No log        | 0.6667 | 46   | 1.0534          | -0.0026 | 1.0534 |
| No log        | 0.6957 | 48   | 0.9329          | 0.0381  | 0.9329 |
| No log        | 0.7246 | 50   | 0.8143          | 0.0225  | 0.8143 |
| No log        | 0.7536 | 52   | 0.7703          | -0.0860 | 0.7703 |
| No log        | 0.7826 | 54   | 0.7862          | 0.0     | 0.7862 |
| No log        | 0.8116 | 56   | 0.8271          | 0.0     | 0.8271 |
| No log        | 0.8406 | 58   | 0.8682          | 0.0     | 0.8682 |
| No log        | 0.8696 | 60   | 0.9099          | 0.0     | 0.9099 |
| No log        | 0.8986 | 62   | 0.9266          | 0.0     | 0.9266 |
| No log        | 0.9275 | 64   | 0.9343          | 0.0     | 0.9343 |
| No log        | 0.9565 | 66   | 0.9325          | 0.0     | 0.9325 |
| No log        | 0.9855 | 68   | 0.9261          | 0.0     | 0.9261 |
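
The Qwk and Mse columns above are quadratic weighted kappa and mean squared error. The card does not describe the metric pipeline, so the sketch below is one plausible implementation using scikit-learn; in particular, rounding continuous predictions to integer labels before the kappa computation is an assumption.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def qwk_and_mse(labels, preds):
    """Quadratic weighted kappa and MSE for regression-style score predictions."""
    mse = mean_squared_error(labels, preds)
    qwk = cohen_kappa_score(
        np.rint(labels).astype(int),  # assumption: round to integer score bins
        np.rint(preds).astype(int),
        weights="quadratic",
    )
    return qwk, mse

# Toy usage with made-up values:
labels = np.array([1.0, 2.0, 3.0, 2.0])
preds = np.array([1.2, 1.8, 2.6, 2.2])
print(qwk_and_mse(labels, preds))  # (1.0, 0.07)
```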

Framework versions

  • Transformers 4.44.0
  • PyTorch 2.4.0
  • Datasets 2.21.0
  • Tokenizers 0.19.1
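
The pins above can be checked against a local environment with a quick version comparison; this is a convenience sketch, not part of the original training code.

```python
import datasets
import tokenizers
import torch
import transformers

# Versions pinned on this card.
expected = {
    "transformers": "4.44.0",
    "torch": "2.4.0",
    "datasets": "2.21.0",
    "tokenizers": "0.19.1",
}
installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    have = installed[name]
    status = "ok" if have == want else f"mismatch (card pins {want})"
    print(f"{name} {have}: {status}")
```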
Model size: 135M parameters (Safetensors, F32)

Model repository: salbatarni/arabert_cross_vocabulary_task6_fold3 (fine-tuned from aubmindlab/bert-base-arabertv02)