# arabert_cross_development_task1_fold3
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.5861
- Qwk: 0.7634
- Mse: 0.5861
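
Here, Qwk is the quadratic weighted Cohen's kappa, a standard agreement metric for ordinal scoring tasks, and Mse is the mean squared error of the predicted scores (its value coinciding with the loss suggests an MSE training objective). Below is a minimal sketch of how such metrics can be computed with scikit-learn; the card does not state the actual evaluation code, so this is an assumption:

```python
# Hypothetical metric computation; assumes ordinal gold scores and
# continuous model predictions, which the card does not confirm.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def compute_metrics(preds: np.ndarray, labels: np.ndarray) -> dict:
    mse = mean_squared_error(labels, preds)
    # Quadratic weighted kappa needs discrete categories, so round
    # the regression outputs to the nearest integer score.
    qwk = cohen_kappa_score(
        labels.astype(int),
        np.rint(preds).astype(int),
        weights="quadratic",
    )
    return {"qwk": qwk, "mse": mse}

# Example: perfect agreement yields qwk == 1.0 and mse == 0.0
print(compute_metrics(np.array([1.0, 2.0, 3.0]), np.array([1, 2, 3])))
```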
## Model description

More information needed
## Intended uses & limitations

More information needed
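
Although the card gives no usage details, a minimal inference sketch follows. The single-output regression head (`num_labels=1`) is an assumption inferred from the MSE/Qwk metrics above, not something the card confirms:

```python
# Hypothetical inference sketch; the regression head is an assumption
# inferred from the MSE/Qwk metrics, not stated in the card.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "salbatarni/arabert_cross_development_task1_fold3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("نص عربي للتقييم", return_tensors="pt")  # "an Arabic text to score"
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```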
## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a minimal `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
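
A sketch of how these settings map onto `transformers.TrainingArguments`; only the hyperparameter values come from the card, while the model and dataset wiring are placeholders:

```python
# Hypothetical trainer setup; only the hyperparameter values are taken
# from the card. Dataset and metric wiring are placeholders.
from transformers import (
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

args = TrainingArguments(
    output_dir="arabert_cross_development_task1_fold3",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,          # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)

model = AutoModelForSequenceClassification.from_pretrained(
    "aubmindlab/bert-base-arabertv02",
    num_labels=1,  # assumption: single-score regression, per the MSE metric
)

# trainer = Trainer(model=model, args=args,
#                   train_dataset=...,   # not specified in the card
#                   eval_dataset=...)
# trainer.train()
```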
### Training results
Training Loss | Epoch | Step | Validation Loss | Qwk | Mse |
---|---|---|---|---|---|
No log | 0.1333 | 2 | 1.7502 | 0.1271 | 1.7502 |
No log | 0.2667 | 4 | 1.2292 | 0.1987 | 1.2292 |
No log | 0.4 | 6 | 1.3080 | 0.3555 | 1.3080 |
No log | 0.5333 | 8 | 0.9607 | 0.4976 | 0.9607 |
No log | 0.6667 | 10 | 0.8578 | 0.6328 | 0.8578 |
No log | 0.8 | 12 | 0.8243 | 0.6683 | 0.8243 |
No log | 0.9333 | 14 | 0.6736 | 0.6082 | 0.6736 |
No log | 1.0667 | 16 | 0.6841 | 0.5941 | 0.6841 |
No log | 1.2 | 18 | 0.6510 | 0.6300 | 0.6510 |
No log | 1.3333 | 20 | 0.7744 | 0.7345 | 0.7744 |
No log | 1.4667 | 22 | 0.6786 | 0.7336 | 0.6786 |
No log | 1.6 | 24 | 0.5769 | 0.6663 | 0.5769 |
No log | 1.7333 | 26 | 0.5647 | 0.6751 | 0.5647 |
No log | 1.8667 | 28 | 0.6191 | 0.7228 | 0.6191 |
No log | 2.0 | 30 | 0.6480 | 0.7149 | 0.6480 |
No log | 2.1333 | 32 | 0.5930 | 0.6377 | 0.5930 |
No log | 2.2667 | 34 | 0.5792 | 0.6840 | 0.5792 |
No log | 2.4 | 36 | 0.6399 | 0.7684 | 0.6399 |
No log | 2.5333 | 38 | 0.6099 | 0.7730 | 0.6099 |
No log | 2.6667 | 40 | 0.5513 | 0.7336 | 0.5513 |
No log | 2.8 | 42 | 0.5787 | 0.7674 | 0.5787 |
No log | 2.9333 | 44 | 0.6353 | 0.7926 | 0.6353 |
No log | 3.0667 | 46 | 0.5670 | 0.7594 | 0.5670 |
No log | 3.2 | 48 | 0.6004 | 0.7827 | 0.6004 |
No log | 3.3333 | 50 | 0.6263 | 0.7869 | 0.6263 |
No log | 3.4667 | 52 | 0.5762 | 0.7498 | 0.5762 |
No log | 3.6 | 54 | 0.5570 | 0.7472 | 0.5570 |
No log | 3.7333 | 56 | 0.6280 | 0.7790 | 0.6280 |
No log | 3.8667 | 58 | 0.6056 | 0.7764 | 0.6056 |
No log | 4.0 | 60 | 0.5239 | 0.7299 | 0.5239 |
No log | 4.1333 | 62 | 0.5192 | 0.7305 | 0.5192 |
No log | 4.2667 | 64 | 0.5703 | 0.7631 | 0.5703 |
No log | 4.4 | 66 | 0.6518 | 0.7918 | 0.6518 |
No log | 4.5333 | 68 | 0.7302 | 0.7883 | 0.7302 |
No log | 4.6667 | 70 | 0.6654 | 0.7960 | 0.6654 |
No log | 4.8 | 72 | 0.5714 | 0.7539 | 0.5714 |
No log | 4.9333 | 74 | 0.5149 | 0.7046 | 0.5149 |
No log | 5.0667 | 76 | 0.5129 | 0.6751 | 0.5129 |
No log | 5.2 | 78 | 0.5200 | 0.7257 | 0.5200 |
No log | 5.3333 | 80 | 0.5968 | 0.7701 | 0.5968 |
No log | 5.4667 | 82 | 0.6356 | 0.7899 | 0.6356 |
No log | 5.6 | 84 | 0.5976 | 0.7718 | 0.5976 |
No log | 5.7333 | 86 | 0.5510 | 0.7528 | 0.5510 |
No log | 5.8667 | 88 | 0.5499 | 0.7505 | 0.5499 |
No log | 6.0 | 90 | 0.5581 | 0.7507 | 0.5581 |
No log | 6.1333 | 92 | 0.5846 | 0.7624 | 0.5846 |
No log | 6.2667 | 94 | 0.6247 | 0.7828 | 0.6247 |
No log | 6.4 | 96 | 0.6363 | 0.7865 | 0.6363 |
No log | 6.5333 | 98 | 0.6065 | 0.7792 | 0.6065 |
No log | 6.6667 | 100 | 0.5753 | 0.7552 | 0.5753 |
No log | 6.8 | 102 | 0.5617 | 0.7438 | 0.5617 |
No log | 6.9333 | 104 | 0.5593 | 0.7415 | 0.5593 |
No log | 7.0667 | 106 | 0.5501 | 0.7410 | 0.5501 |
No log | 7.2 | 108 | 0.5736 | 0.7489 | 0.5736 |
No log | 7.3333 | 110 | 0.6235 | 0.7773 | 0.6235 |
No log | 7.4667 | 112 | 0.6392 | 0.7840 | 0.6392 |
No log | 7.6 | 114 | 0.6211 | 0.7732 | 0.6211 |
No log | 7.7333 | 116 | 0.5970 | 0.7733 | 0.5970 |
No log | 7.8667 | 118 | 0.5611 | 0.7530 | 0.5611 |
No log | 8.0 | 120 | 0.5439 | 0.7470 | 0.5439 |
No log | 8.1333 | 122 | 0.5497 | 0.7484 | 0.5497 |
No log | 8.2667 | 124 | 0.5836 | 0.7580 | 0.5836 |
No log | 8.4 | 126 | 0.6389 | 0.7749 | 0.6389 |
No log | 8.5333 | 128 | 0.6778 | 0.7817 | 0.6778 |
No log | 8.6667 | 130 | 0.6794 | 0.7790 | 0.6794 |
No log | 8.8 | 132 | 0.6575 | 0.7769 | 0.6575 |
No log | 8.9333 | 134 | 0.6182 | 0.7797 | 0.6182 |
No log | 9.0667 | 136 | 0.5921 | 0.7689 | 0.5921 |
No log | 9.2 | 138 | 0.5771 | 0.7588 | 0.5771 |
No log | 9.3333 | 140 | 0.5657 | 0.7530 | 0.5657 |
No log | 9.4667 | 142 | 0.5672 | 0.7530 | 0.5672 |
No log | 9.6 | 144 | 0.5759 | 0.7639 | 0.5759 |
No log | 9.7333 | 146 | 0.5809 | 0.7611 | 0.5809 |
No log | 9.8667 | 148 | 0.5847 | 0.7611 | 0.5847 |
No log | 10.0 | 150 | 0.5861 | 0.7634 | 0.5861 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1