
arabert_cross_vocabulary_task7_fold1

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02; the fine-tuning dataset is not specified in this card. It achieves the following results on the evaluation set (a sketch of how such metrics are typically computed follows the list):

  • Loss: 0.8627
  • Qwk: 0.3143
  • Mse: 0.8627
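
Qwk denotes quadratic weighted kappa and Mse the mean squared error on the evaluation set. The evaluation script itself is not included in the card; the sketch below only illustrates how these metrics are commonly computed with scikit-learn, using placeholder labels rather than the model's actual evaluation data.

```python
# Illustrative only: placeholder labels, not this model's evaluation data.
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = [0, 1, 2, 2, 3, 1]  # hypothetical gold scores on an ordinal scale
y_pred = [0, 1, 1, 2, 2, 1]  # hypothetical model predictions

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")  # quadratic weighted kappa
mse = mean_squared_error(y_true, y_pred)                      # mean squared error
print(f"Qwk: {qwk:.4f}, Mse: {mse:.4f}")
```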

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a matching TrainingArguments sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 1
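
The training script is not included in the card. Below is a minimal sketch of a Transformers TrainingArguments configuration that matches the listed hyperparameters; the output directory and anything not listed above (dataset, model head, metric function) are assumptions.

```python
# A minimal sketch matching the listed hyperparameters; output_dir is an assumption.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="arabert_cross_vocabulary_task7_fold1",  # hypothetical output directory
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,       # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```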

Training results

| Training Loss | Epoch  | Step | Validation Loss | Qwk    | Mse    |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| No log        | 0.0351 | 2    | 3.8189          | 0.0193 | 3.8189 |
| No log        | 0.0702 | 4    | 1.7985          | 0.0716 | 1.7985 |
| No log        | 0.1053 | 6    | 1.6094          | 0.1126 | 1.6094 |
| No log        | 0.1404 | 8    | 1.2553          | 0.1291 | 1.2553 |
| No log        | 0.1754 | 10   | 0.8065          | 0.1674 | 0.8065 |
| No log        | 0.2105 | 12   | 0.7741          | 0.2489 | 0.7741 |
| No log        | 0.2456 | 14   | 1.1045          | 0.2614 | 1.1045 |
| No log        | 0.2807 | 16   | 1.1978          | 0.2692 | 1.1978 |
| No log        | 0.3158 | 18   | 1.0983          | 0.2938 | 1.0983 |
| No log        | 0.3509 | 20   | 1.0710          | 0.3026 | 1.0710 |
| No log        | 0.3860 | 22   | 0.6682          | 0.3670 | 0.6682 |
| No log        | 0.4211 | 24   | 0.5966          | 0.4097 | 0.5966 |
| No log        | 0.4561 | 26   | 0.6833          | 0.3683 | 0.6833 |
| No log        | 0.4912 | 28   | 0.9987          | 0.2985 | 0.9987 |
| No log        | 0.5263 | 30   | 1.2852          | 0.2523 | 1.2852 |
| No log        | 0.5614 | 32   | 1.0778          | 0.2737 | 1.0778 |
| No log        | 0.5965 | 34   | 0.7230          | 0.3317 | 0.7230 |
| No log        | 0.6316 | 36   | 0.6062          | 0.4082 | 0.6062 |
| No log        | 0.6667 | 38   | 0.5822          | 0.4172 | 0.5822 |
| No log        | 0.7018 | 40   | 0.6112          | 0.4001 | 0.6112 |
| No log        | 0.7368 | 42   | 0.6913          | 0.3542 | 0.6913 |
| No log        | 0.7719 | 44   | 0.8003          | 0.3063 | 0.8003 |
| No log        | 0.8070 | 46   | 0.8430          | 0.3051 | 0.8430 |
| No log        | 0.8421 | 48   | 0.8930          | 0.3029 | 0.8930 |
| No log        | 0.8772 | 50   | 0.8887          | 0.3031 | 0.8887 |
| No log        | 0.9123 | 52   | 0.8824          | 0.2971 | 0.8824 |
| No log        | 0.9474 | 54   | 0.8755          | 0.2971 | 0.8755 |
| No log        | 0.9825 | 56   | 0.8627          | 0.3143 | 0.8627 |

Framework versions

  • Transformers 4.44.0
  • Pytorch 2.4.0
  • Datasets 2.21.0
  • Tokenizers 0.19.1
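
The card does not document how the checkpoint is meant to be used. As a hedged sketch, it can be loaded from the Hub with the Transformers versions listed above; the sequence-classification head is an assumption inferred from the reported Qwk/Mse metrics, not something stated in the card.

```python
# Hedged usage sketch; the task head is assumed, not documented in the card.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "salbatarni/arabert_cross_vocabulary_task7_fold1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("مثال على نص عربي", return_tensors="pt")  # "an example of Arabic text"
outputs = model(**inputs)
print(outputs.logits)
```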