---
license: apache-2.0
base_model: riotu-lab/ArabianGPT-01B
tags:
- generated_from_trainer
metrics:
- bleu
- rouge
model-index:
- name: results_fixed
  results: []
---


# results_fixed

This model is a fine-tuned version of [riotu-lab/ArabianGPT-01B](https://huggingface.co/riotu-lab/ArabianGPT-01B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1411
- Bleu: 0.2987
- Rouge1: 0.5831
- Rouge2: 0.3405
- Rougel: 0.5413
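
Below is a minimal generation sketch for loading this fine-tune with `transformers`. The repository id and the prompt are placeholders (the published Hub id and the task format used for fine-tuning are not stated in this card), so adjust both before use.

```python
# Hedged usage sketch: the model id below is a placeholder, not the published
# repository name, and the prompt is illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/results_fixed"  # hypothetical id; replace with the real Hub id or a local path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "اكتب جملة قصيرة عن الطقس."  # illustrative Arabic prompt; match the fine-tuning format
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```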

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
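
For reference, a hedged sketch of how the settings above map onto `transformers.TrainingArguments`. Only the values listed in this card are taken from the training run; the output directory is a placeholder, and dataset preparation and the `Trainer` call are omitted.

```python
# Sketch only: mirrors the hyperparameters listed above.
# output_dir is a placeholder; dataset and Trainer setup are omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="results_fixed",      # placeholder output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=3.0,
    adam_beta1=0.9,                  # Adam betas/epsilon as reported above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```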

### Training results

| Training Loss | Epoch  | Step | Bleu   | Validation Loss | Rouge1 | Rouge2 | Rougel |
|:-------------:|:------:|:----:|:------:|:---------------:|:------:|:------:|:------:|
| 3.9816        | 1.0846 | 500  | 0.2270 | 2.8213          | 0.3851 | 0.1625 | 0.3198 |
| 3.9816        | 2.0    | 922  | 0.2481 | 2.4793          | 0.4405 | 0.2030 | 0.3844 |
| 2.6514        | 3.0    | 1383 | 0.2658 | 2.2931          | 0.4828 | 0.2385 | 0.4327 |
| 2.3308        | 4.0    | 1844 | 0.2797 | 2.1801          | 0.5114 | 0.2679 | 0.4659 |
| 2.1322        | 5.0    | 2305 | 0.2886 | 2.1133          | 0.5264 | 0.2867 | 0.4852 |
| 1.9942        | 6.0    | 2766 | 0.2926 | 2.0649          | 0.5417 | 0.2993 | 0.5003 |
| 1.8884        | 7.0    | 3227 | 0.2967 | 2.0354          | 0.5529 | 0.3108 | 0.5117 |
| 1.8003        | 8.0    | 3688 | 0.2973 | 2.0164          | 0.5597 | 0.3199 | 0.5204 |
| 1.7305        | 9.0    | 4149 | 0.3053 | 2.0036          | 0.5660 | 0.3262 | 0.5266 |
| 1.6672        | 10.0   | 4610 | 0.3072 | 1.9933          | 0.5704 | 0.3319 | 0.5325 |
| 1.6132        | 11.0   | 5071 | 0.3093 | 1.9886          | 0.5737 | 0.3366 | 0.5363 |
| 1.5659        | 12.0   | 5532 | 0.3099 | 1.9834          | 0.5777 | 0.3397 | 0.5396 |
| 1.5659        | 13.0   | 5993 | 0.3117 | 1.9819          | 0.5796 | 0.3423 | 0.5418 |
| 1.5244        | 14.0   | 6454 | 0.3126 | 1.9798          | 0.5833 | 0.3452 | 0.5451 |
| 1.4884        | 15.0   | 6915 | 0.3130 | 1.9793          | 0.5832 | 0.3461 | 0.5454 |
| 1.4594        | 16.0   | 7376 | 0.3133 | 1.9800          | 0.5846 | 0.3469 | 0.5466 |
| 1.4361        | 17.0   | 7837 | 0.3151 | 1.9799          | 0.5865 | 0.3493 | 0.5485 |
| 1.4159        | 18.0   | 8298 | 0.3149 | 1.9809          | 0.5865 | 0.3495 | 0.5486 |
| 1.4159        | 19.0   | 8398 | 0.3099 | 2.0767          | 0.5858 | 0.3476 | 0.5471 |
| 1.6189        | 20.0   | 8840 | 0.3119 | 2.0654          | 0.5862 | 0.3489 | 0.5479 |

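Metrics of the form reported above (BLEU, ROUGE-1/2/L) can be computed with the `evaluate` library, as in the hedged sketch below; the exact evaluation script used for this card is not provided, and the prediction/reference strings are placeholders.

```python
# Hedged sketch of computing BLEU/ROUGE scores like those reported above;
# the strings are placeholders, not samples from the training data.
import evaluate

bleu = evaluate.load("bleu")
rouge = evaluate.load("rouge")

predictions = ["النص الذي ولّده النموذج"]   # model outputs
references = ["النص المرجعي المتوقع"]       # gold references

print(bleu.compute(predictions=predictions, references=[[r] for r in references]))
print(rouge.compute(predictions=predictions, references=references))
```
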

### Framework versions

- Transformers 4.45.0.dev0
- Pytorch 2.3.1+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1