---
license: apache-2.0
base_model: riotu-lab/ArabianGPT-01B
tags:
- generated_from_trainer
metrics:
- bleu
- rouge
model-index:
- name: results_fixed
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# results_fixed

This model is a fine-tuned version of [riotu-lab/ArabianGPT-01B](https://huggingface.co/riotu-lab/ArabianGPT-01B) on an unknown dataset.
It achieves the following results on the evaluation set (a minimal usage sketch follows the list):
- Loss: 1.9793
- Bleu: 0.3130
- Rouge1: 0.5832
- Rouge2: 0.3461
- Rougel: 0.5454
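
For quick experimentation, the snippet below is a hedged inference sketch: the repository id `your-username/results_fixed` is a placeholder (replace it with the actual Hub id or a local checkpoint path), and the generation settings are illustrative rather than taken from the training setup.

```python
# Minimal inference sketch. The repo id below is a placeholder, not a published model path.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/results_fixed"  # placeholder: replace with the real Hub id or local path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Example Arabic prompt: "Write a short sentence about the weather."
prompt = "اكتب جملة قصيرة عن الطقس."
inputs = tokenizer(prompt, return_tensors="pt")

# Illustrative generation settings; not taken from this card.
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```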

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20.0
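
As a rough guide, these values might map onto `transformers.TrainingArguments` as sketched below; the output directory and the per-epoch evaluation strategy are assumptions inferred from this card, not settings confirmed by the original training script.

```python
# Hedged reconstruction of the listed hyperparameters as TrainingArguments.
# output_dir and eval_strategy are assumptions; everything else mirrors the list above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="results_fixed",   # assumed from the model name
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=20.0,
    eval_strategy="epoch",        # assumption: the results table reports roughly one evaluation per epoch
)
```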

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Bleu   | Rouge1 | Rouge2 | Rougel |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|:------:|
| 3.9816        | 1.0846 | 500  | 2.8213          | 0.2270 | 0.3851 | 0.1625 | 0.3198 |
| 3.9816        | 2.0    | 922  | 2.4793          | 0.2481 | 0.4405 | 0.2030 | 0.3844 |
| 2.6514        | 3.0    | 1383 | 2.2931          | 0.2658 | 0.4828 | 0.2385 | 0.4327 |
| 2.3308        | 4.0    | 1844 | 2.1801          | 0.2797 | 0.5114 | 0.2679 | 0.4659 |
| 2.1322        | 5.0    | 2305 | 2.1133          | 0.2886 | 0.5264 | 0.2867 | 0.4852 |
| 1.9942        | 6.0    | 2766 | 2.0649          | 0.2926 | 0.5417 | 0.2993 | 0.5003 |
| 1.8884        | 7.0    | 3227 | 2.0354          | 0.2967 | 0.5529 | 0.3108 | 0.5117 |
| 1.8003        | 8.0    | 3688 | 2.0164          | 0.2973 | 0.5597 | 0.3199 | 0.5204 |
| 1.7305        | 9.0    | 4149 | 2.0036          | 0.3053 | 0.5660 | 0.3262 | 0.5266 |
| 1.6672        | 10.0   | 4610 | 1.9933          | 0.3072 | 0.5704 | 0.3319 | 0.5325 |
| 1.6132        | 11.0   | 5071 | 1.9886          | 0.3093 | 0.5737 | 0.3366 | 0.5363 |
| 1.5659        | 12.0   | 5532 | 1.9834          | 0.3099 | 0.5777 | 0.3397 | 0.5396 |
| 1.5659        | 13.0   | 5993 | 1.9819          | 0.3117 | 0.5796 | 0.3423 | 0.5418 |
| 1.5244        | 14.0   | 6454 | 1.9798          | 0.3126 | 0.5833 | 0.3452 | 0.5451 |
| 1.4884        | 15.0   | 6915 | 1.9793          | 0.3130 | 0.5832 | 0.3461 | 0.5454 |
| 1.4594        | 16.0   | 7376 | 1.9800          | 0.3133 | 0.5846 | 0.3469 | 0.5466 |
| 1.4361        | 17.0   | 7837 | 1.9799          | 0.3151 | 0.5865 | 0.3493 | 0.5485 |
| 1.4159        | 18.0   | 8298 | 1.9809          | 0.3149 | 0.5865 | 0.3495 | 0.5486 |
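
The metric configuration used by the Trainer's evaluation loop is not included in this card. The sketch below shows one way comparable BLEU and ROUGE scores could be computed offline with the `evaluate` library; treating `bleu` and `rouge` as the metric backends, and passing a whitespace tokenizer to ROUGE so Arabic text is not dropped by its default English tokenizer, are both assumptions.

```python
# Hedged sketch: offline BLEU/ROUGE computation with the `evaluate` library.
# The metric backends and the whitespace ROUGE tokenizer are assumptions, not
# settings documented in this card. The strings below are toy examples.
import evaluate

bleu = evaluate.load("bleu")
rouge = evaluate.load("rouge")

predictions = ["النص الذي ولده النموذج"]  # decoded model outputs
references = [["النص المرجعي الصحيح"]]    # one list of references per prediction

print(bleu.compute(predictions=predictions, references=references))
print(
    rouge.compute(
        predictions=predictions,
        references=[refs[0] for refs in references],
        tokenizer=lambda text: text.split(),  # keep Arabic tokens intact
    )
)
```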


### Framework versions

- Transformers 4.45.0.dev0
- Pytorch 2.3.1+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1