
arabert_cross_vocabulary_task6_fold4

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.9017
  • Qwk: 0.7854
  • Mse: 0.9017
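
The checkpoint can be loaded with the standard transformers API. The sketch below is illustrative only: it assumes the model was saved with a sequence-classification head (the Qwk/Mse metrics suggest a scoring-style objective), and the input text is a placeholder.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Repo id taken from the model card; head type is an assumption.
model_id = "salbatarni/arabert_cross_vocabulary_task6_fold4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "..."  # placeholder for an Arabic input text
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)
```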

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 1
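
For reference, here is a hypothetical reconstruction of the configuration above as transformers TrainingArguments; the output_dir is a placeholder, and any setting not listed above is left at its default.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed in this card; output_dir and
# everything not shown are illustrative defaults, not from the card.
training_args = TrainingArguments(
    output_dir="arabert_cross_vocabulary_task6_fold4",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```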

Training results

| Training Loss | Epoch  | Step | Validation Loss | Qwk    | Mse    |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| No log        | 0.0323 | 2    | 3.2692          | 0.0190 | 3.2692 |
| No log        | 0.0645 | 4    | 2.2061          | 0.1047 | 2.2061 |
| No log        | 0.0968 | 6    | 1.4731          | 0.2850 | 1.4731 |
| No log        | 0.1290 | 8    | 1.4939          | 0.3745 | 1.4939 |
| No log        | 0.1613 | 10   | 1.4323          | 0.5432 | 1.4323 |
| No log        | 0.1935 | 12   | 1.1650          | 0.5658 | 1.1650 |
| No log        | 0.2258 | 14   | 1.2574          | 0.6380 | 1.2574 |
| No log        | 0.2581 | 16   | 1.5764          | 0.6494 | 1.5764 |
| No log        | 0.2903 | 18   | 1.5312          | 0.6719 | 1.5312 |
| No log        | 0.3226 | 20   | 1.0794          | 0.7533 | 1.0794 |
| No log        | 0.3548 | 22   | 0.7210          | 0.7812 | 0.7210 |
| No log        | 0.3871 | 24   | 0.6686          | 0.7619 | 0.6686 |
| No log        | 0.4194 | 26   | 0.7673          | 0.8035 | 0.7673 |
| No log        | 0.4516 | 28   | 0.9964          | 0.7709 | 0.9964 |
| No log        | 0.4839 | 30   | 1.0717          | 0.7698 | 1.0717 |
| No log        | 0.5161 | 32   | 1.0503          | 0.7731 | 1.0503 |
| No log        | 0.5484 | 34   | 0.9077          | 0.7987 | 0.9077 |
| No log        | 0.5806 | 36   | 0.7545          | 0.8028 | 0.7545 |
| No log        | 0.6129 | 38   | 0.7792          | 0.8013 | 0.7792 |
| No log        | 0.6452 | 40   | 0.7860          | 0.7981 | 0.7860 |
| No log        | 0.6774 | 42   | 0.7492          | 0.7988 | 0.7492 |
| No log        | 0.7097 | 44   | 0.7053          | 0.8126 | 0.7053 |
| No log        | 0.7419 | 46   | 0.7612          | 0.8056 | 0.7612 |
| No log        | 0.7742 | 48   | 0.8008          | 0.7851 | 0.8008 |
| No log        | 0.8065 | 50   | 0.8570          | 0.7727 | 0.8570 |
| No log        | 0.8387 | 52   | 0.8902          | 0.7709 | 0.8902 |
| No log        | 0.8710 | 54   | 0.9179          | 0.7780 | 0.9179 |
| No log        | 0.9032 | 56   | 0.9185          | 0.7758 | 0.9185 |
| No log        | 0.9355 | 58   | 0.9139          | 0.7854 | 0.9139 |
| No log        | 0.9677 | 60   | 0.9045          | 0.7854 | 0.9045 |
| No log        | 1.0    | 62   | 0.9017          | 0.7854 | 0.9017 |
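
Qwk here denotes quadratic weighted kappa. Below is a minimal sketch of how Qwk and Mse can be computed with scikit-learn; the integer scores are made up for illustration, and continuous predictions from a regression head would typically be rounded to the label scale first.

```python
from sklearn.metrics import cohen_kappa_score, mean_squared_error

# Illustrative labels and predictions; not taken from this model's data.
y_true = [3, 2, 4, 3, 1]
y_pred = [3, 3, 4, 2, 1]

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
mse = mean_squared_error(y_true, y_pred)
print(f"Qwk={qwk:.4f}  Mse={mse:.4f}")
```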

Framework versions

  • Transformers 4.44.0
  • Pytorch 2.4.0
  • Datasets 2.21.0
  • Tokenizers 0.19.1