---
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: arabert_baseline_development_task7_fold1
results: []
---
# arabert_baseline_development_task7_fold1
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset.
It achieves the following results on the evaluation set (a short metric-computation sketch follows this list):
- Loss: 0.4808
- Qwk: 0.6800
- Mse: 0.4658
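
The card does not state how these metrics were produced. As a minimal sketch, assuming Qwk is quadratic weighted Cohen's kappa over integer scores and Mse is the usual mean squared error, they can be computed with scikit-learn as follows (the scikit-learn backend and the `compute_metrics` helper are assumptions, not the training code):

```python
# Sketch of the evaluation metrics reported above.
# Assumption: Qwk = quadratic weighted Cohen's kappa, Mse = mean squared error.
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def compute_metrics(predictions, labels):
    """Return quadratic weighted kappa and MSE for integer score predictions."""
    qwk = cohen_kappa_score(labels, predictions, weights="quadratic")
    mse = mean_squared_error(labels, predictions)
    return {"qwk": qwk, "mse": mse}

# Example with hypothetical gold and predicted scores:
print(compute_metrics(predictions=[3, 3, 4, 1], labels=[3, 2, 4, 1]))
```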
## Model description
More information needed
## Intended uses & limitations
More information needed
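
Until the intended use is documented, the sketch below shows one way to run inference with the Transformers library. The Hub repository ID and the single-output sequence-classification head are assumptions; the QWK/MSE metrics above suggest a scalar scoring head, but the card does not confirm the task format.

```python
# Minimal inference sketch.
# Assumptions: the checkpoint exposes a sequence-classification head with a
# single regression-style output, and the model ID below is where the
# checkpoint is hosted (adjust the path to the actual repository).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "arabert_baseline_development_task7_fold1"  # hypothetical repository ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "نص عربي للتقييم"  # an Arabic sentence to score
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```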
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch mirroring them follows this list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
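
As a minimal sketch, the listed values map onto the Transformers `TrainingArguments` as shown below. The `output_dir` is an assumption, and the Adam betas and epsilon are the Trainer defaults, so they are not set explicitly.

```python
# Sketch of a training configuration matching the hyperparameters above.
# Assumption: output_dir is illustrative; Adam with betas=(0.9, 0.999) and
# epsilon=1e-08 is the Trainer's default optimizer, so it needs no override.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="arabert_baseline_development_task7_fold1",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```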
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| No log | 0.3333 | 2 | 1.1708 | 0.1401 | 1.1442 |
| No log | 0.6667 | 4 | 0.6797 | 0.4733 | 0.6689 |
| No log | 1.0 | 6 | 0.8726 | 0.3500 | 0.8601 |
| No log | 1.3333 | 8 | 0.6330 | 0.4407 | 0.6258 |
| No log | 1.6667 | 10 | 0.4729 | 0.7317 | 0.4705 |
| No log | 2.0 | 12 | 0.4560 | 0.7727 | 0.4525 |
| No log | 2.3333 | 14 | 0.4835 | 0.5161 | 0.4748 |
| No log | 2.6667 | 16 | 0.6021 | 0.4361 | 0.5871 |
| No log | 3.0 | 18 | 0.5490 | 0.5036 | 0.5343 |
| No log | 3.3333 | 20 | 0.4924 | 0.5772 | 0.4790 |
| No log | 3.6667 | 22 | 0.4506 | 0.6531 | 0.4384 |
| No log | 4.0 | 24 | 0.4816 | 0.6710 | 0.4656 |
| No log | 4.3333 | 26 | 0.5471 | 0.5818 | 0.5260 |
| No log | 4.6667 | 28 | 0.5883 | 0.4286 | 0.5652 |
| No log | 5.0 | 30 | 0.5645 | 0.4971 | 0.5423 |
| No log | 5.3333 | 32 | 0.4974 | 0.5823 | 0.4779 |
| No log | 5.6667 | 34 | 0.4623 | 0.6369 | 0.4450 |
| No log | 6.0 | 36 | 0.4534 | 0.6980 | 0.4376 |
| No log | 6.3333 | 38 | 0.4859 | 0.6369 | 0.4690 |
| No log | 6.6667 | 40 | 0.5046 | 0.5823 | 0.4879 |
| No log | 7.0 | 42 | 0.4998 | 0.6225 | 0.4840 |
| No log | 7.3333 | 44 | 0.4877 | 0.6434 | 0.4725 |
| No log | 7.6667 | 46 | 0.4853 | 0.6434 | 0.4702 |
| No log | 8.0 | 48 | 0.4746 | 0.6434 | 0.4601 |
| No log | 8.3333 | 50 | 0.4732 | 0.6434 | 0.4588 |
| No log | 8.6667 | 52 | 0.4819 | 0.6434 | 0.4671 |
| No log | 9.0 | 54 | 0.4873 | 0.6225 | 0.4722 |
| No log | 9.3333 | 56 | 0.4833 | 0.6800 | 0.4684 |
| No log | 9.6667 | 58 | 0.4812 | 0.6800 | 0.4662 |
| No log | 10.0 | 60 | 0.4808 | 0.6800 | 0.4658 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1