
arabert_baseline_relevance_task1_fold1

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0669
  • Qwk (quadratic weighted kappa): 0.0233
  • Mse (mean squared error): 0.0682
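
A minimal usage sketch, assuming the checkpoint is published under the repository id salbatarni/arabert_baseline_relevance_task1_fold1 (with its tokenizer) and exposes a sequence-classification/regression head; the exact label setup is not documented in this card:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Repository id as listed for this card; adjust if the model is hosted elsewhere.
model_id = "salbatarni/arabert_baseline_relevance_task1_fold1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Placeholder Arabic input; the task is relevance scoring (task 1, fold 1).
text = "هذا نص تجريبي لتقييم مدى الصلة بالموضوع."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# With a single-output (regression) head, the raw value is the predicted relevance
# score; with a multi-class head, take logits.argmax(-1) instead.
print(logits.squeeze().tolist())
```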

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an equivalent Trainer setup is sketched after the list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
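
For reference, a sketch of a roughly equivalent Trainer configuration under these hyperparameters. The original training script and dataset are not included in this card, so the regression head (num_labels=1), the dataset objects, and the 2-step evaluation interval implied by the log below are assumptions:

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base_model = "aubmindlab/bert-base-arabertv02"
tokenizer = AutoTokenizer.from_pretrained(base_model)
# num_labels=1 (regression-style scoring head) is an assumption; the card does not state it.
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=1)

training_args = TrainingArguments(
    output_dir="arabert_baseline_relevance_task1_fold1",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
    eval_strategy="steps",  # the results log below evaluates every 2 steps (assumption)
    eval_steps=2,
)

# train_ds / eval_ds are hypothetical tokenized datasets; the actual data is undocumented.
# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```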

Training results

| Training Loss | Epoch  | Step | Validation Loss | Qwk     | Mse    |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|
| No log        | 0.3333 | 2    | 0.9611          | -0.0186 | 0.9691 |
| No log        | 0.6667 | 4    | 0.2075          | -0.0851 | 0.2124 |
| No log        | 1.0    | 6    | 0.1656          | 0.0207  | 0.1701 |
| No log        | 1.3333 | 8    | 0.1145          | 0.0392  | 0.1165 |
| No log        | 1.6667 | 10   | 0.1145          | 0.0165  | 0.1165 |
| No log        | 2.0    | 12   | 0.1921          | 0.0     | 0.1938 |
| No log        | 2.3333 | 14   | 0.2095          | 0.0     | 0.2110 |
| No log        | 2.6667 | 16   | 0.1634          | 0.0     | 0.1651 |
| No log        | 3.0    | 18   | 0.0995          | 0.0     | 0.1002 |
| No log        | 3.3333 | 20   | 0.1102          | 0.0050  | 0.1101 |
| No log        | 3.6667 | 22   | 0.1004          | 0.0165  | 0.1002 |
| No log        | 4.0    | 24   | 0.0724          | 0.0233  | 0.0728 |
| No log        | 4.3333 | 26   | 0.0523          | 0.0233  | 0.0531 |
| No log        | 4.6667 | 28   | 0.0574          | 0.0233  | 0.0583 |
| No log        | 5.0    | 30   | 0.0614          | 0.0233  | 0.0626 |
| No log        | 5.3333 | 32   | 0.0663          | 0.0233  | 0.0676 |
| No log        | 5.6667 | 34   | 0.0729          | 0.0233  | 0.0742 |
| No log        | 6.0    | 36   | 0.0631          | 0.0308  | 0.0641 |
| No log        | 6.3333 | 38   | 0.0561          | 0.0597  | 0.0563 |
| No log        | 6.6667 | 40   | 0.0579          | 0.0870  | 0.0579 |
| No log        | 7.0    | 42   | 0.0563          | 0.0597  | 0.0566 |
| No log        | 7.3333 | 44   | 0.0596          | 0.0308  | 0.0606 |
| No log        | 7.6667 | 46   | 0.0661          | 0.0308  | 0.0673 |
| No log        | 8.0    | 48   | 0.0735          | 0.0050  | 0.0747 |
| No log        | 8.3333 | 50   | 0.0721          | 0.0105  | 0.0733 |
| No log        | 8.6667 | 52   | 0.0679          | 0.0233  | 0.0690 |
| No log        | 9.0    | 54   | 0.0657          | 0.0233  | 0.0669 |
| No log        | 9.3333 | 56   | 0.0652          | 0.0233  | 0.0664 |
| No log        | 9.6667 | 58   | 0.0664          | 0.0233  | 0.0677 |
| No log        | 10.0   | 60   | 0.0669          | 0.0233  | 0.0682 |
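
Qwk above is quadratic weighted kappa and Mse is mean squared error. A minimal sketch of how these metrics can be computed with scikit-learn, assuming integer relevance labels; the toy values below are illustrative only, not taken from this model's evaluation:

```python
from sklearn.metrics import cohen_kappa_score, mean_squared_error

# Illustrative references and predictions; real labels for this task are not documented here.
y_true = [0, 1, 2, 1, 0, 2]
y_pred = [0, 1, 1, 1, 0, 2]

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
mse = mean_squared_error(y_true, y_pred)
print(f"Qwk: {qwk:.4f}  Mse: {mse:.4f}")
```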

Framework versions

  • Transformers 4.44.0
  • Pytorch 2.4.0
  • Datasets 2.21.0
  • Tokenizers 0.19.1