
arabert_baseline_augmented_organization_task1_fold1

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set (a minimal usage sketch follows the metrics):

  • Loss: 0.5936
  • QWK (quadratic weighted kappa): 0.6341
  • MSE: 0.5936
  • RMSE: 0.7705
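
The card does not include a usage example, so here is a minimal inference sketch. Because the validation loss equals the MSE, the model was most likely trained with a regression objective; the single-output head and the score scale below are assumptions, not documented in this card.

```python
# Minimal inference sketch. Assumption: a single-output regression head
# (the validation loss equaling the MSE suggests an MSE objective);
# the score scale is not documented in this card.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "MayBashendy/arabert_baseline_augmented_organization_task1_fold1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)
model.eval()

text = "هذا مثال قصير للتجربة."  # placeholder Arabic input
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"predicted score: {score:.4f}")
```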

Model description

This is aubmindlab/bert-base-arabertv02 (~135M parameters, F32 safetensors) fine-tuned on fold 1 of task 1. Judging by the repository name and the regression-style metrics, it appears to be a baseline scorer for an "organization" trait trained on augmented data, but the task is not otherwise documented.

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged `Trainer` sketch mirroring them follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
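
The sketch below mirrors the hyperparameter list in `TrainingArguments`. The dataset, preprocessing, and metric callbacks are not documented in this card, so they appear only as comments; the regression head is an assumption based on the MSE-based metrics.

```python
# Sketch of a Trainer configuration matching the hyperparameters above.
# The Adam betas/epsilon listed are the library defaults, so they need
# no explicit flags. eval_steps=2 is inferred from the step column in the
# results table; "No log" training losses appear because the default
# logging interval (500 steps) exceeds the 150 total optimizer steps.
from transformers import AutoModelForSequenceClassification, TrainingArguments

model = AutoModelForSequenceClassification.from_pretrained(
    "aubmindlab/bert-base-arabertv02",
    num_labels=1,  # regression head assumed from the MSE-based metrics
)

args = TrainingArguments(
    output_dir="arabert_baseline_augmented_organization_task1_fold1",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    eval_strategy="steps",
    eval_steps=2,
)

# Trainer(model=model, args=args, train_dataset=..., eval_dataset=...,
#         compute_metrics=...).train() would reproduce the schedule; the
# datasets and metric function are not documented in this card.
```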

Training results

| Training Loss | Epoch | Step | Validation Loss | QWK | MSE | RMSE |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 0.1333 | 2 | 3.0841 | 0.0302 | 3.0841 | 1.7562 |
| No log | 0.2667 | 4 | 1.4917 | -0.0310 | 1.4917 | 1.2214 |
| No log | 0.4 | 6 | 0.8151 | 0.1901 | 0.8151 | 0.9028 |
| No log | 0.5333 | 8 | 0.7692 | 0.1355 | 0.7692 | 0.8770 |
| No log | 0.6667 | 10 | 0.6718 | 0.3913 | 0.6718 | 0.8196 |
| No log | 0.8 | 12 | 0.6380 | 0.4141 | 0.6380 | 0.7987 |
| No log | 0.9333 | 14 | 0.6826 | 0.3778 | 0.6826 | 0.8262 |
| No log | 1.0667 | 16 | 0.8820 | 0.3212 | 0.8820 | 0.9391 |
| No log | 1.2 | 18 | 0.6164 | 0.4085 | 0.6164 | 0.7851 |
| No log | 1.3333 | 20 | 0.6655 | 0.5116 | 0.6655 | 0.8158 |
| No log | 1.4667 | 22 | 0.7851 | 0.5116 | 0.7851 | 0.8861 |
| No log | 1.6 | 24 | 0.6450 | 0.6354 | 0.6450 | 0.8031 |
| No log | 1.7333 | 26 | 0.5017 | 0.5581 | 0.5017 | 0.7083 |
| No log | 1.8667 | 28 | 0.9415 | 0.2524 | 0.9415 | 0.9703 |
| No log | 2.0 | 30 | 0.6734 | 0.4615 | 0.6734 | 0.8206 |
| No log | 2.1333 | 32 | 0.4050 | 0.5984 | 0.4050 | 0.6364 |
| No log | 2.2667 | 34 | 0.5873 | 0.6051 | 0.5873 | 0.7663 |
| No log | 2.4 | 36 | 0.6292 | 0.6051 | 0.6292 | 0.7932 |
| No log | 2.5333 | 38 | 0.4895 | 0.6051 | 0.4895 | 0.6996 |
| No log | 2.6667 | 40 | 0.4042 | 0.4893 | 0.4042 | 0.6358 |
| No log | 2.8 | 42 | 0.4005 | 0.5543 | 0.4005 | 0.6329 |
| No log | 2.9333 | 44 | 0.4059 | 0.5977 | 0.4059 | 0.6371 |
| No log | 3.0667 | 46 | 0.4222 | 0.5468 | 0.4222 | 0.6497 |
| No log | 3.2 | 48 | 0.5120 | 0.7004 | 0.5120 | 0.7155 |
| No log | 3.3333 | 50 | 0.6828 | 0.6486 | 0.6828 | 0.8263 |
| No log | 3.4667 | 52 | 0.6311 | 0.6316 | 0.6311 | 0.7944 |
| No log | 3.6 | 54 | 0.5075 | 0.5987 | 0.5075 | 0.7124 |
| No log | 3.7333 | 56 | 0.5206 | 0.5106 | 0.5206 | 0.7216 |
| No log | 3.8667 | 58 | 0.5476 | 0.5157 | 0.5476 | 0.7400 |
| No log | 4.0 | 60 | 0.4934 | 0.5205 | 0.4934 | 0.7024 |
| No log | 4.1333 | 62 | 0.5593 | 0.5751 | 0.5593 | 0.7479 |
| No log | 4.2667 | 64 | 0.8046 | 0.6883 | 0.8046 | 0.8970 |
| No log | 4.4 | 66 | 0.8289 | 0.7131 | 0.8289 | 0.9104 |
| No log | 4.5333 | 68 | 0.7470 | 0.6957 | 0.7470 | 0.8643 |
| No log | 4.6667 | 70 | 0.5733 | 0.6028 | 0.5733 | 0.7572 |
| No log | 4.8 | 72 | 0.5127 | 0.5205 | 0.5127 | 0.7160 |
| No log | 4.9333 | 74 | 0.5790 | 0.5581 | 0.5790 | 0.7609 |
| No log | 5.0667 | 76 | 0.5315 | 0.5772 | 0.5315 | 0.7290 |
| No log | 5.2 | 78 | 0.4861 | 0.5611 | 0.4861 | 0.6972 |
| No log | 5.3333 | 80 | 0.6214 | 0.6172 | 0.6214 | 0.7883 |
| No log | 5.4667 | 82 | 0.7550 | 0.7131 | 0.7550 | 0.8689 |
| No log | 5.6 | 84 | 0.7349 | 0.7131 | 0.7349 | 0.8572 |
| No log | 5.7333 | 86 | 0.6173 | 0.6056 | 0.6173 | 0.7857 |
| No log | 5.8667 | 88 | 0.5064 | 0.6209 | 0.5064 | 0.7116 |
| No log | 6.0 | 90 | 0.5083 | 0.5882 | 0.5083 | 0.7129 |
| No log | 6.1333 | 92 | 0.5456 | 0.5581 | 0.5456 | 0.7386 |
| No log | 6.2667 | 94 | 0.5064 | 0.5882 | 0.5064 | 0.7116 |
| No log | 6.4 | 96 | 0.4834 | 0.5576 | 0.4834 | 0.6953 |
| No log | 6.5333 | 98 | 0.5811 | 0.6341 | 0.5811 | 0.7623 |
| No log | 6.6667 | 100 | 0.6637 | 0.6818 | 0.6637 | 0.8147 |
| No log | 6.8 | 102 | 0.6741 | 0.6067 | 0.6741 | 0.8210 |
| No log | 6.9333 | 104 | 0.6070 | 0.6067 | 0.6070 | 0.7791 |
| No log | 7.0667 | 106 | 0.5169 | 0.6525 | 0.5169 | 0.7190 |
| No log | 7.2 | 108 | 0.4812 | 0.6465 | 0.4812 | 0.6937 |
| No log | 7.3333 | 110 | 0.4903 | 0.6879 | 0.4903 | 0.7002 |
| No log | 7.4667 | 112 | 0.5035 | 0.6405 | 0.5035 | 0.7096 |
| No log | 7.6 | 114 | 0.5003 | 0.6879 | 0.5003 | 0.7073 |
| No log | 7.7333 | 116 | 0.5243 | 0.7016 | 0.5243 | 0.7241 |
| No log | 7.8667 | 118 | 0.5949 | 0.6776 | 0.5949 | 0.7713 |
| No log | 8.0 | 120 | 0.6398 | 0.6341 | 0.6398 | 0.7999 |
| No log | 8.1333 | 122 | 0.6353 | 0.6341 | 0.6353 | 0.7970 |
| No log | 8.2667 | 124 | 0.6012 | 0.6341 | 0.6012 | 0.7754 |
| No log | 8.4 | 126 | 0.5564 | 0.6957 | 0.5564 | 0.7459 |
| No log | 8.5333 | 128 | 0.5379 | 0.6957 | 0.5379 | 0.7334 |
| No log | 8.6667 | 130 | 0.5384 | 0.6525 | 0.5384 | 0.7338 |
| No log | 8.8 | 132 | 0.5411 | 0.6525 | 0.5411 | 0.7356 |
| No log | 8.9333 | 134 | 0.5647 | 0.6449 | 0.5647 | 0.7514 |
| No log | 9.0667 | 136 | 0.5841 | 0.6341 | 0.5841 | 0.7643 |
| No log | 9.2 | 138 | 0.5972 | 0.6341 | 0.5972 | 0.7728 |
| No log | 9.3333 | 140 | 0.5998 | 0.6341 | 0.5998 | 0.7745 |
| No log | 9.4667 | 142 | 0.5996 | 0.6341 | 0.5996 | 0.7743 |
| No log | 9.6 | 144 | 0.6022 | 0.6341 | 0.6022 | 0.7760 |
| No log | 9.7333 | 146 | 0.5983 | 0.6341 | 0.5983 | 0.7735 |
| No log | 9.8667 | 148 | 0.5937 | 0.6341 | 0.5937 | 0.7705 |
| No log | 10.0 | 150 | 0.5936 | 0.6341 | 0.5936 | 0.7705 |
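
The four reported metrics are related: the validation loss equals the MSE (consistent with an MSE training objective), and the RMSE is its square root. Below is a sketch of how such metrics can be computed; rounding predictions to integers before the QWK is an assumption, since the card does not document its metric code.

```python
# Sketch of the reported metrics (QWK, MSE, RMSE). Rounding predictions
# to integers before QWK is an assumption not documented in this card.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def compute_metrics(preds, labels):
    mse = mean_squared_error(labels, preds)
    qwk = cohen_kappa_score(
        np.rint(labels).astype(int),
        np.rint(preds).astype(int),
        weights="quadratic",  # quadratic weighting of disagreements
    )
    return {"qwk": qwk, "mse": mse, "rmse": float(np.sqrt(mse))}

# Toy example with hypothetical scores:
print(compute_metrics(np.array([2.2, 1.8, 3.1]), np.array([2, 2, 3])))
```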

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.4.0+cu118
  • Datasets 2.21.0
  • Tokenizers 0.19.1

Model tree for MayBashendy/arabert_baseline_augmented_organization_task1_fold1

  • Finetuned from aubmindlab/bert-base-arabertv02 → this model