# arabert_cross_vocabulary_task5_fold1
This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.9484
- Qwk: 0.2440
- Mse: 0.9484
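The checkpoint can be loaded with the Transformers library. Below is a minimal inference sketch; it assumes the model is published under salbatarni/arabert_cross_vocabulary_task5_fold1 and uses a single-output (regression-style) sequence-classification head, which the MSE/QWK metrics suggest but the card does not state explicitly:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed repo id (taken from this card's name); adjust if the checkpoint lives elsewhere.
model_id = "salbatarni/arabert_cross_vocabulary_task5_fold1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Score a single Arabic sentence; whether the output is a scalar score or class
# logits depends on how the head was exported.
inputs = tokenizer("نص تجريبي للتقييم", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)
```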
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
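As a rough reproduction sketch, these hyperparameters map onto the Transformers Trainer API as follows. The regression head (num_labels=1) and the dataset variables are assumptions, since the card does not describe the task or the data:

```python
from transformers import (
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

# Base model from this card; num_labels=1 (scalar score head) is an assumption
# based on the MSE/QWK metrics reported below.
model = AutoModelForSequenceClassification.from_pretrained(
    "aubmindlab/bert-base-arabertv02", num_labels=1
)

training_args = TrainingArguments(
    output_dir="arabert_cross_vocabulary_task5_fold1",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer defaults,
    # so no optimizer settings need to be overridden.
)

# train_dataset and eval_dataset are hypothetical placeholders; the training and
# evaluation data are not documented in this card.
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.train()
```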
### Training results
Training Loss | Epoch | Step | Validation Loss | Qwk | Mse |
---|---|---|---|---|---|
No log | 0.0351 | 2 | 4.0285 | -0.0026 | 4.0285 |
No log | 0.0702 | 4 | 1.9411 | 0.0411 | 1.9411 |
No log | 0.1053 | 6 | 1.1570 | 0.0820 | 1.1570 |
No log | 0.1404 | 8 | 1.0653 | 0.0860 | 1.0653 |
No log | 0.1754 | 10 | 1.5739 | 0.0992 | 1.5739 |
No log | 0.2105 | 12 | 1.9614 | 0.1146 | 1.9614 |
No log | 0.2456 | 14 | 1.2558 | 0.1733 | 1.2558 |
No log | 0.2807 | 16 | 0.6707 | 0.2820 | 0.6707 |
No log | 0.3158 | 18 | 0.5871 | 0.3622 | 0.5871 |
No log | 0.3509 | 20 | 0.5667 | 0.4017 | 0.5667 |
No log | 0.3860 | 22 | 0.6361 | 0.3990 | 0.6361 |
No log | 0.4211 | 24 | 0.9147 | 0.3506 | 0.9147 |
No log | 0.4561 | 26 | 1.1469 | 0.3145 | 1.1469 |
No log | 0.4912 | 28 | 1.6471 | 0.2494 | 1.6471 |
No log | 0.5263 | 30 | 1.7498 | 0.2180 | 1.7498 |
No log | 0.5614 | 32 | 1.2305 | 0.2731 | 1.2305 |
No log | 0.5965 | 34 | 0.7970 | 0.3383 | 0.7970 |
No log | 0.6316 | 36 | 0.5754 | 0.4131 | 0.5754 |
No log | 0.6667 | 38 | 0.5427 | 0.4406 | 0.5427 |
No log | 0.7018 | 40 | 0.5537 | 0.4387 | 0.5537 |
No log | 0.7368 | 42 | 0.5912 | 0.4077 | 0.5912 |
No log | 0.7719 | 44 | 0.6607 | 0.3453 | 0.6607 |
No log | 0.8070 | 46 | 0.7386 | 0.3013 | 0.7386 |
No log | 0.8421 | 48 | 0.8577 | 0.2819 | 0.8577 |
No log | 0.8772 | 50 | 0.9544 | 0.2404 | 0.9544 |
No log | 0.9123 | 52 | 0.9673 | 0.2440 | 0.9673 |
No log | 0.9474 | 54 | 0.9583 | 0.2440 | 0.9583 |
No log | 0.9825 | 56 | 0.9484 | 0.2440 | 0.9484 |
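The Qwk and Mse columns above are consistent with quadratic weighted kappa and mean squared error. A minimal sketch of computing both with scikit-learn, assuming integer-valued gold labels and continuous predictions that are rounded before the kappa computation (the exact metric code used during training is not included in this card):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def qwk_and_mse(y_true, y_pred):
    """Return quadratic weighted kappa (on rounded predictions) and plain MSE."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = mean_squared_error(y_true, y_pred)
    qwk = cohen_kappa_score(
        y_true.round().astype(int),
        y_pred.round().astype(int),
        weights="quadratic",
    )
    return {"qwk": qwk, "mse": mse}

# Toy example with hypothetical scores.
print(qwk_and_mse([0, 1, 2, 3], [0.2, 1.1, 1.8, 2.6]))
```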
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1