|
--- |
|
base_model: aubmindlab/bert-base-arabertv02 |
|
tags: |
|
- generated_from_trainer |
|
model-index: |
|
- name: arabert_baseline_style_task5_fold1 |
|
results: [] |
|
--- |
|
|
|
|
|
|
# arabert_baseline_style_task5_fold1 |
|
|
|
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset.
|
It achieves the following results on the evaluation set: |
|
- Loss: 0.4351 |
|
- Qwk: 0.7772 |
|
- Mse: 0.4351 |
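
The checkpoint can be loaded with the `transformers` library. A minimal sketch, assuming the model carries a single-output regression head (consistent with the MSE/QWK metrics above, but not stated in this card) and using the model name as a placeholder repository id:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Placeholder id; substitute the actual location of this checkpoint.
model_id = "arabert_baseline_style_task5_fold1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Score one Arabic sentence; with a single-output head the raw logit
# is read directly as the predicted style score.
inputs = tokenizer("نص عربي للتقييم", return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```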
|
|
|
## Model description |
|
|
|
More information needed |
|
|
|
## Intended uses & limitations |
|
|
|
More information needed |
|
|
|
## Training and evaluation data |
|
|
|
More information needed |
|
|
|
## Training procedure |
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training (see the sketch after this list):
|
- learning_rate: 2e-05 |
|
- train_batch_size: 16 |
|
- eval_batch_size: 16 |
|
- seed: 42 |
|
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 |
|
- lr_scheduler_type: linear |
|
- num_epochs: 10 |
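
These settings map directly onto the standard `transformers` `TrainingArguments`. A minimal sketch, assuming the default `Trainer` loop and hypothetical `train_ds`/`eval_ds` datasets; the step-based evaluation interval is inferred from the results table below:

```python
from transformers import TrainingArguments, Trainer

# Mirrors the hyperparameters listed above; AdamW with the stated betas/epsilon
# and a linear LR schedule are the Trainer defaults.
training_args = TrainingArguments(
    output_dir="arabert_baseline_style_task5_fold1",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    eval_strategy="steps",
    eval_steps=2,  # assumption: the table below logs an evaluation every 2 steps
)

# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_ds, eval_dataset=eval_ds,
#                   compute_metrics=compute_metrics)
# trainer.train()
```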
|
|
|
### Training results |
|
|
|
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | |
|
|:-------------:|:------:|:----:|:---------------:|:------:|:------:| |
|
| No log | 0.3333 | 2 | 3.0281 | 0.0186 | 3.0281 | |
|
| No log | 0.6667 | 4 | 1.1102 | 0.0 | 1.1102 | |
|
| No log | 1.0 | 6 | 0.6207 | 0.3478 | 0.6207 | |
|
| No log | 1.3333 | 8 | 0.5268 | 0.3462 | 0.5268 | |
|
| No log | 1.6667 | 10 | 0.5443 | 0.2941 | 0.5443 | |
|
| No log | 2.0 | 12 | 0.4954 | 0.2941 | 0.4954 | |
|
| No log | 2.3333 | 14 | 0.3975 | 0.3314 | 0.3975 | |
|
| No log | 2.6667 | 16 | 0.3492 | 0.4828 | 0.3492 | |
|
| No log | 3.0 | 18 | 0.3545 | 0.5238 | 0.3545 | |
|
| No log | 3.3333 | 20 | 0.3763 | 0.6839 | 0.3763 | |
|
| No log | 3.6667 | 22 | 0.4022 | 0.7472 | 0.4022 | |
|
| No log | 4.0 | 24 | 0.4557 | 0.7772 | 0.4557 | |
|
| No log | 4.3333 | 26 | 0.4928 | 0.7549 | 0.4928 | |
|
| No log | 4.6667 | 28 | 0.4922 | 0.7549 | 0.4922 | |
|
| No log | 5.0 | 30 | 0.4857 | 0.6939 | 0.4857 | |
|
| No log | 5.3333 | 32 | 0.4759 | 0.6939 | 0.4759 | |
|
| No log | 5.6667 | 34 | 0.4948 | 0.7549 | 0.4948 | |
|
| No log | 6.0 | 36 | 0.4684 | 0.7772 | 0.4684 | |
|
| No log | 6.3333 | 38 | 0.4446 | 0.7772 | 0.4446 | |
|
| No log | 6.6667 | 40 | 0.4305 | 0.7772 | 0.4305 | |
|
| No log | 7.0 | 42 | 0.4345 | 0.7222 | 0.4345 | |
|
| No log | 7.3333 | 44 | 0.4345 | 0.6324 | 0.4345 | |
|
| No log | 7.6667 | 46 | 0.4215 | 0.7500 | 0.4215 |
|
| No log | 8.0 | 48 | 0.4192 | 0.7772 | 0.4192 | |
|
| No log | 8.3333 | 50 | 0.4335 | 0.7772 | 0.4335 | |
|
| No log | 8.6667 | 52 | 0.4440 | 0.7549 | 0.4440 | |
|
| No log | 9.0 | 54 | 0.4449 | 0.7549 | 0.4449 | |
|
| No log | 9.3333 | 56 | 0.4405 | 0.7549 | 0.4405 | |
|
| No log | 9.6667 | 58 | 0.4375 | 0.7772 | 0.4375 | |
|
| No log | 10.0 | 60 | 0.4351 | 0.7772 | 0.4351 | |
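
In the table, Qwk is the quadratically weighted Cohen's kappa and Mse the mean squared error on the evaluation set. A minimal `compute_metrics` sketch consistent with those columns, assuming a single regression output and integer gold scores (both are assumptions, not confirmed by this card):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    preds = np.squeeze(predictions)
    return {
        # Quadratic weighted kappa on rounded predictions vs. integer gold scores.
        "qwk": cohen_kappa_score(
            np.rint(preds).astype(int), labels.astype(int), weights="quadratic"
        ),
        # Mean squared error on the raw regression outputs.
        "mse": mean_squared_error(labels, preds),
    }
```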
|
|
|
|
|
### Framework versions |
|
|
|
- Transformers 4.44.0 |
|
- Pytorch 2.4.0 |
|
- Datasets 2.21.0 |
|
- Tokenizers 0.19.1 |
|
|