---
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: arabert_baseline_development_task5_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# arabert_baseline_development_task5_fold0
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset.
It achieves the following results on the evaluation set (a usage sketch follows the list):
- Loss: 1.0419
- Qwk: 0.5183
- Mse: 1.0419
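As a usage reference, here is a minimal, hedged sketch of loading the checkpoint for scoring. It assumes the model was saved with a single-label regression head (`num_labels=1`), which is consistent with the MSE/Qwk metrics above but should be verified against the checkpoint's `config.json`; the repository id below is a placeholder.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder id: replace with the local path or Hub repo id of this checkpoint.
model_id = "arabert_baseline_development_task5_fold0"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Example Arabic input to score (assuming a single regression output).
text = "هذه إجابة الطالب التي سيتم تقييمها"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```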
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
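For reference, these hyperparameters could be expressed through `transformers.TrainingArguments` roughly as follows. This is a sketch under the listed values, not the exact training script; the `output_dir` is a placeholder.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="arabert_baseline_development_task5_fold0",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the default
    # optimizer configuration in Transformers, so no extra arguments are needed.
)
```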
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| No log | 0.3333 | 2 | 1.6380 | 0.0929 | 1.6380 |
| No log | 0.6667 | 4 | 1.5106 | 0.0 | 1.5106 |
| No log | 1.0 | 6 | 1.3471 | 0.0 | 1.3471 |
| No log | 1.3333 | 8 | 1.2335 | 0.0 | 1.2335 |
| No log | 1.6667 | 10 | 1.2261 | 0.1576 | 1.2261 |
| No log | 2.0 | 12 | 1.2476 | 0.2560 | 1.2476 |
| No log | 2.3333 | 14 | 1.2739 | 0.2628 | 1.2739 |
| No log | 2.6667 | 16 | 1.2494 | 0.2628 | 1.2494 |
| No log | 3.0 | 18 | 1.1857 | 0.4062 | 1.1857 |
| No log | 3.3333 | 20 | 1.1005 | 0.3598 | 1.1005 |
| No log | 3.6667 | 22 | 1.1228 | 0.3976 | 1.1228 |
| No log | 4.0 | 24 | 1.0985 | 0.3976 | 1.0985 |
| No log | 4.3333 | 26 | 1.0341 | 0.375 | 1.0341 |
| No log | 4.6667 | 28 | 0.9989 | 0.5789 | 0.9989 |
| No log | 5.0 | 30 | 0.9963 | 0.5789 | 0.9963 |
| No log | 5.3333 | 32 | 0.9852 | 0.5361 | 0.9852 |
| No log | 5.6667 | 34 | 0.9961 | 0.5991 | 0.9961 |
| No log | 6.0 | 36 | 1.0348 | 0.4725 | 1.0348 |
| No log | 6.3333 | 38 | 1.0147 | 0.5238 | 1.0147 |
| No log | 6.6667 | 40 | 1.0123 | 0.5238 | 1.0123 |
| No log | 7.0 | 42 | 0.9900 | 0.5833 | 0.9900 |
| No log | 7.3333 | 44 | 0.9764 | 0.4872 | 0.9764 |
| No log | 7.6667 | 46 | 0.9850 | 0.4872 | 0.9850 |
| No log | 8.0 | 48 | 0.9775 | 0.5327 | 0.9775 |
| No log | 8.3333 | 50 | 0.9860 | 0.5047 | 0.9860 |
| No log | 8.6667 | 52 | 0.9983 | 0.5183 | 0.9983 |
| No log | 9.0 | 54 | 1.0128 | 0.5183 | 1.0128 |
| No log | 9.3333 | 56 | 1.0321 | 0.5183 | 1.0321 |
| No log | 9.6667 | 58 | 1.0407 | 0.5183 | 1.0407 |
| No log | 10.0 | 60 | 1.0419 | 0.5183 | 1.0419 |
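The Qwk and Mse columns are presumably quadratic weighted Cohen's kappa and mean squared error. A small sketch of how such metrics can be computed with scikit-learn (not necessarily the exact evaluation code used by the Trainer here):

```python
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def compute_metrics(predictions, labels):
    # Round continuous predictions to the nearest integer grade for kappa.
    rounded = [int(round(p)) for p in predictions]
    qwk = cohen_kappa_score(labels, rounded, weights="quadratic")
    mse = mean_squared_error(labels, predictions)
    return {"qwk": qwk, "mse": mse}

# Toy example with hypothetical predictions and gold labels.
print(compute_metrics([1.2, 2.8, 3.1], [1, 3, 3]))
```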
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1