---
base_model: aubmindlab/bert-base-arabertv02
tags:
  - generated_from_trainer
model-index:
  - name: arabert_cross_relevance_task1_fold5
    results: []
---

arabert_cross_relevance_task1_fold5

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1955
  • Qwk (quadratic weighted kappa): 0.3846
  • Mse (mean squared error): 0.1955
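Qwk here is the quadratic weighted kappa, which scores agreement between predicted and reference integer ratings while penalizing large disagreements more heavily; Mse is the ordinary mean squared error. As a sketch of how such metrics are computed (the helper names and example labels are illustrative, not taken from the evaluation script):

```python
def quadratic_weighted_kappa(y_true, y_pred, n_labels):
    """Cohen's kappa with quadratic weights over integer labels 0..n_labels-1."""
    # Observed confusion matrix.
    O = [[0] * n_labels for _ in range(n_labels)]
    for t, p in zip(y_true, y_pred):
        O[t][p] += 1
    hist_true = [sum(row) for row in O]
    hist_pred = [sum(O[i][j] for i in range(n_labels)) for j in range(n_labels)]
    n = len(y_true)
    num = den = 0.0
    for i in range(n_labels):
        for j in range(n_labels):
            w = (i - j) ** 2 / (n_labels - 1) ** 2   # quadratic penalty
            expected = hist_true[i] * hist_pred[j] / n
            num += w * O[i][j]
            den += w * expected
    return 1.0 - num / den

def mse(y_true, y_pred):
    """Mean squared error over paired ratings."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
```

Perfect agreement yields a kappa of 1.0, chance-level agreement 0.0, and systematic disagreement can go negative, so the card's Qwk of 0.3846 indicates modest agreement beyond chance.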

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 1
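With lr_scheduler_type set to linear and no warmup steps listed, the learning rate presumably decays linearly from 2e-05 to zero over the single epoch. A minimal sketch of that schedule, assuming zero warmup (the function name is ours, not from the training script):

```python
def linear_lr(base_lr: float, step: int, total_steps: int) -> float:
    """Linearly decay base_lr to zero over total_steps, assuming no warmup."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

# The results table logs evaluations up to step 56 within 1 epoch,
# so the run comprises roughly that many optimizer steps.
print(linear_lr(2e-05, 0, 56))   # learning rate at the start of training
print(linear_lr(2e-05, 56, 56))  # learning rate at the end of training
```

In the Transformers Trainer this behavior corresponds to the default linear schedule; halfway through training the rate would be half the base value.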

Training results

| Training Loss | Epoch  | Step | Validation Loss | Qwk    | Mse    |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| No log        | 0.0351 | 2    | 0.3613          | 0.2826 | 0.3613 |
| No log        | 0.0702 | 4    | 0.4116          | 0.5484 | 0.4116 |
| No log        | 0.1053 | 6    | 0.2771          | 0.4880 | 0.2771 |
| No log        | 0.1404 | 8    | 0.2157          | 0.3346 | 0.2157 |
| No log        | 0.1754 | 10   | 0.2149          | 0.3324 | 0.2149 |
| No log        | 0.2105 | 12   | 0.2119          | 0.3302 | 0.2119 |
| No log        | 0.2456 | 14   | 0.2146          | 0.3336 | 0.2146 |
| No log        | 0.2807 | 16   | 0.2085          | 0.3319 | 0.2085 |
| No log        | 0.3158 | 18   | 0.2118          | 0.3641 | 0.2118 |
| No log        | 0.3509 | 20   | 0.2261          | 0.4439 | 0.2261 |
| No log        | 0.3860 | 22   | 0.2392          | 0.4713 | 0.2392 |
| No log        | 0.4211 | 24   | 0.2344          | 0.4095 | 0.2344 |
| No log        | 0.4561 | 26   | 0.2151          | 0.3602 | 0.2151 |
| No log        | 0.4912 | 28   | 0.2029          | 0.3304 | 0.2029 |
| No log        | 0.5263 | 30   | 0.2041          | 0.3227 | 0.2041 |
| No log        | 0.5614 | 32   | 0.2085          | 0.3015 | 0.2085 |
| No log        | 0.5965 | 34   | 0.2114          | 0.2998 | 0.2114 |
| No log        | 0.6316 | 36   | 0.2114          | 0.2998 | 0.2114 |
| No log        | 0.6667 | 38   | 0.2041          | 0.3156 | 0.2041 |
| No log        | 0.7018 | 40   | 0.1985          | 0.3186 | 0.1985 |
| No log        | 0.7368 | 42   | 0.1948          | 0.3490 | 0.1948 |
| No log        | 0.7719 | 44   | 0.1925          | 0.3543 | 0.1925 |
| No log        | 0.8070 | 46   | 0.1939          | 0.3667 | 0.1939 |
| No log        | 0.8421 | 48   | 0.1928          | 0.3684 | 0.1928 |
| No log        | 0.8772 | 50   | 0.1932          | 0.3736 | 0.1932 |
| No log        | 0.9123 | 52   | 0.1942          | 0.3795 | 0.1942 |
| No log        | 0.9474 | 54   | 0.1952          | 0.3846 | 0.1952 |
| No log        | 0.9825 | 56   | 0.1955          | 0.3846 | 0.1955 |
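Note that validation loss bottoms out mid-run (step 44) rather than at the final step, so the reported final metrics are not the best seen during training. A quick way to check this, with the (step, validation loss) pairs transcribed from the table above:

```python
# (step, validation_loss) pairs transcribed from the training results table.
evals = [(2, 0.3613), (4, 0.4116), (6, 0.2771), (8, 0.2157), (10, 0.2149),
         (12, 0.2119), (14, 0.2146), (16, 0.2085), (18, 0.2118), (20, 0.2261),
         (22, 0.2392), (24, 0.2344), (26, 0.2151), (28, 0.2029), (30, 0.2041),
         (32, 0.2085), (34, 0.2114), (36, 0.2114), (38, 0.2041), (40, 0.1985),
         (42, 0.1948), (44, 0.1925), (46, 0.1939), (48, 0.1928), (50, 0.1932),
         (52, 0.1942), (54, 0.1952), (56, 0.1955)]

# Find the evaluation step with the lowest validation loss.
best_step, best_loss = min(evals, key=lambda e: e[1])
print(best_step, best_loss)  # step 44, loss 0.1925
```

Since the card does not say whether the uploaded checkpoint is the final or the best one, loading with load_best_model_at_end (or early stopping) would be worth considering when reproducing this run.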

Framework versions

  • Transformers 4.44.0
  • Pytorch 2.4.0
  • Datasets 2.21.0
  • Tokenizers 0.19.1