---
base_model: aubmindlab/bert-base-arabertv02
tags:
  - generated_from_trainer
model-index:
  - name: arabert_cross_relevance_task1_fold0
    results: []
---

arabert_cross_relevance_task1_fold0

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2406
  • Qwk (quadratic weighted kappa): 0.0225
  • Mse (mean squared error): 0.2406
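
As a minimal sketch, these two metrics could be reproduced with scikit-learn as shown below; the function name, the variable names, and the rounding of continuous predictions to integer labels before the kappa computation are illustrative assumptions, not details from this card.

```python
# Sketch: computing Qwk (quadratic weighted kappa) and Mse (mean squared
# error) for an evaluation split. `preds` and `labels` are hypothetical.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def compute_metrics(preds: np.ndarray, labels: np.ndarray) -> dict:
    mse = mean_squared_error(labels, preds)
    # Kappa is defined over discrete categories, so continuous model
    # outputs are rounded to the nearest integer label (assumption).
    qwk = cohen_kappa_score(labels.astype(int), np.rint(preds).astype(int),
                            weights="quadratic")
    return {"qwk": qwk, "mse": mse}

# Dummy usage:
print(compute_metrics(np.array([0.1, 1.2, 0.9]), np.array([0, 1, 1])))
```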

Model description

More information needed

Intended uses & limitations

More information needed
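
Although no usage details are provided, a checkpoint like this can typically be loaded for inference as a sequence-classification model. In the sketch below, the hub id salbatarni/arabert_cross_relevance_task1_fold0 is an assumption inferred from the model name on this card, and the Arabic example sentence is arbitrary:

```python
# Hedged inference sketch. The hub id is assumed, not confirmed by the card.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "salbatarni/arabert_cross_relevance_task1_fold0"  # assumed id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("هذا مثال على نص عربي.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # head shape depends on the checkpoint
print(logits)
```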

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the equivalent TrainingArguments follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 1
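
A minimal sketch of these values expressed as transformers.TrainingArguments; output_dir is a placeholder, and any argument not listed above is left at its default:

```python
# Sketch: the hyperparameters above as TrainingArguments. Only the values
# listed in this card are real; output_dir is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="arabert_cross_relevance_task1_fold0",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,      # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,   # epsilon=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```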

Training results

| Training Loss | Epoch  | Step | Validation Loss | Qwk    | Mse    |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| No log        | 0.0351 | 2    | 0.5970          | 0.0065 | 0.5970 |
| No log        | 0.0702 | 4    | 0.1673          | 0.0060 | 0.1673 |
| No log        | 0.1053 | 6    | 0.3299          | 0.0278 | 0.3299 |
| No log        | 0.1404 | 8    | 0.7352          | 0.0065 | 0.7352 |
| No log        | 0.1754 | 10   | 0.6501          | 0.0208 | 0.6501 |
| No log        | 0.2105 | 12   | 0.4790          | 0.0347 | 0.4790 |
| No log        | 0.2456 | 14   | 0.3876          | 0.0363 | 0.3876 |
| No log        | 0.2807 | 16   | 0.4097          | 0.0294 | 0.4097 |
| No log        | 0.3158 | 18   | 0.3925          | 0.0277 | 0.3925 |
| No log        | 0.3509 | 20   | 0.3541          | 0.0310 | 0.3541 |
| No log        | 0.3860 | 22   | 0.3565          | 0.0461 | 0.3565 |
| No log        | 0.4211 | 24   | 0.3341          | 0.0410 | 0.3341 |
| No log        | 0.4561 | 26   | 0.3266          | 0.0480 | 0.3266 |
| No log        | 0.4912 | 28   | 0.3191          | 0.0381 | 0.3191 |
| No log        | 0.5263 | 30   | 0.3104          | 0.0271 | 0.3104 |
| No log        | 0.5614 | 32   | 0.3215          | 0.0271 | 0.3215 |
| No log        | 0.5965 | 34   | 0.3010          | 0.0194 | 0.3010 |
| No log        | 0.6316 | 36   | 0.3055          | 0.0194 | 0.3055 |
| No log        | 0.6667 | 38   | 0.3447          | 0.0254 | 0.3447 |
| No log        | 0.7018 | 40   | 0.3547          | 0.0399 | 0.3547 |
| No log        | 0.7368 | 42   | 0.3642          | 0.0399 | 0.3642 |
| No log        | 0.7719 | 44   | 0.3510          | 0.0327 | 0.3510 |
| No log        | 0.8070 | 46   | 0.3201          | 0.0178 | 0.3201 |
| No log        | 0.8421 | 48   | 0.2882          | 0.0194 | 0.2882 |
| No log        | 0.8772 | 50   | 0.2672          | 0.0209 | 0.2672 |
| No log        | 0.9123 | 52   | 0.2539          | 0.0209 | 0.2539 |
| No log        | 0.9474 | 54   | 0.2455          | 0.0225 | 0.2455 |
| No log        | 0.9825 | 56   | 0.2406          | 0.0225 | 0.2406 |

Framework versions

  • Transformers 4.44.0
  • Pytorch 2.4.0
  • Datasets 2.21.0
  • Tokenizers 0.19.1