# arabert_baseline_grammar_task2_fold0
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 1.2738
- Qwk: 0.0491
- Mse: 1.2508
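
The snippet below is a minimal usage sketch, not part of the original card. The card does not state the task head or how Qwk is computed; the sketch assumes a single-output regression head (`num_labels=1`, consistent with the reported Mse) and that Qwk is the quadratic weighted Cohen's kappa over rounded scores.

```python
# Minimal usage sketch. Assumptions (not stated in this card): the model is a
# single-output regression head, and "Qwk" is quadratic weighted Cohen's kappa
# computed on rounded predictions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from sklearn.metrics import cohen_kappa_score

model_id = "salbatarni/arabert_baseline_grammar_task2_fold0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

texts = ["...", "..."]   # Arabic input texts (placeholders)
gold = [2.0, 3.0]        # hypothetical gold grammar scores

inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    preds = model(**inputs).logits.squeeze(-1)

mse = torch.mean((preds - torch.tensor(gold)) ** 2).item()
qwk = cohen_kappa_score(
    [round(g) for g in gold],
    [round(p) for p in preds.tolist()],
    weights="quadratic",
)
print(f"MSE: {mse:.4f}, QWK: {qwk:.4f}")
```
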
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
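
As a rough reference, the sketch below maps the hyperparameters above onto a Hugging Face `Trainer` setup. The output directory, datasets, and metric function are hypothetical placeholders, not the author's actual training script.

```python
# Sketch of an equivalent Trainer configuration for the hyperparameters above.
from transformers import AutoModelForSequenceClassification, TrainingArguments

model = AutoModelForSequenceClassification.from_pretrained(
    "aubmindlab/bert-base-arabertv02",
    num_labels=1,  # assumed regression head; not stated in the card
)

args = TrainingArguments(
    output_dir="arabert_baseline_grammar_task2_fold0",  # hypothetical
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=10,
    seed=42,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)

# trainer = Trainer(model=model, args=args,
#                   train_dataset=...,    # fold-0 train split (not released here)
#                   eval_dataset=...,     # fold-0 eval split
#                   compute_metrics=...)  # hypothetical Qwk/Mse metric function
# trainer.train()
```
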
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse |
|:-------------:|:-----:|:----:|:---------------:|:---:|:---:|
No log | 0.3333 | 2 | 4.9005 | 0.0126 | 4.8502 |
No log | 0.6667 | 4 | 2.4699 | 0.0387 | 2.4471 |
No log | 1.0 | 6 | 1.4800 | -0.0982 | 1.4570 |
No log | 1.3333 | 8 | 1.0496 | 0.0541 | 1.0294 |
No log | 1.6667 | 10 | 1.2284 | 0.0 | 1.2139 |
No log | 2.0 | 12 | 1.4327 | 0.0 | 1.4198 |
No log | 2.3333 | 14 | 1.4042 | 0.0 | 1.3911 |
No log | 2.6667 | 16 | 1.1140 | -0.0161 | 1.0974 |
No log | 3.0 | 18 | 1.0138 | 0.1083 | 0.9870 |
No log | 3.3333 | 20 | 1.0084 | 0.2200 | 0.9831 |
No log | 3.6667 | 22 | 1.0806 | 0.0696 | 1.0592 |
No log | 4.0 | 24 | 1.2446 | -0.1170 | 1.2246 |
No log | 4.3333 | 26 | 1.4549 | -0.0960 | 1.4445 |
No log | 4.6667 | 28 | 1.3332 | 0.0585 | 1.3205 |
No log | 5.0 | 30 | 1.0953 | 0.0870 | 1.0641 |
No log | 5.3333 | 32 | 1.0705 | 0.0616 | 1.0381 |
No log | 5.6667 | 34 | 1.1503 | 0.1064 | 1.1300 |
No log | 6.0 | 36 | 1.2818 | 0.0585 | 1.2670 |
No log | 6.3333 | 38 | 1.3332 | -0.0161 | 1.3186 |
No log | 6.6667 | 40 | 1.2686 | 0.0491 | 1.2516 |
No log | 7.0 | 42 | 1.1870 | 0.0491 | 1.1658 |
No log | 7.3333 | 44 | 1.1491 | 0.0491 | 1.1243 |
No log | 7.6667 | 46 | 1.1946 | 0.0491 | 1.1709 |
No log | 8.0 | 48 | 1.3196 | 0.0491 | 1.3003 |
No log | 8.3333 | 50 | 1.3925 | 0.0541 | 1.3747 |
No log | 8.6667 | 52 | 1.3818 | 0.0541 | 1.3633 |
No log | 9.0 | 54 | 1.3491 | 0.0491 | 1.3293 |
No log | 9.3333 | 56 | 1.3127 | 0.0491 | 1.2916 |
No log | 9.6667 | 58 | 1.2864 | 0.0491 | 1.2640 |
No log | 10.0 | 60 | 1.2738 | 0.0491 | 1.2508 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
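
For reproducibility, the sketch below checks an installed environment against the versions listed above; the import names are assumed to map to the listed packages (`torch` for Pytorch).

```python
# Quick runtime check against the versions listed in this card.
expected = {
    "transformers": "4.44.0",
    "torch": "2.4.0",
    "datasets": "2.21.0",
    "tokenizers": "0.19.1",
}
for module, version in expected.items():
    installed = __import__(module).__version__
    print(f"{module}: installed {installed}, card lists {version}")
```
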