e-hossam96 committed on
Commit
72a4d62
1 Parent(s): 62c866d

added more details to model

README.md CHANGED
@@ -3,40 +3,61 @@ library_name: transformers
 license: mit
 base_model: openai-community/gpt2
 tags:
- - generated_from_trainer
 model-index:
- - name: arabic-nano-gpt-v1
- results: []
 datasets:
- - wikimedia/wikipedia
 language:
- - ar
 ---
 
-
 # arabic-nano-gpt-v1

- This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) on an unknown dataset.
- It achieves the following results on the held-out test set:
 - Loss: 3.02885

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

- ## Training procedure

- ### Training hyperparameters

 The following hyperparameters were used during training:
 - learning_rate: 0.0002
 - train_batch_size: 32
 - eval_batch_size: 32
@@ -48,63 +69,17 @@ The following hyperparameters were used during training:
 - lr_scheduler_warmup_ratio: 0.01
 - num_epochs: 24

- <!-- ### Training results -->
-
- <!-- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:-------:|:------:|:---------------:|
- | 4.1743 | 0.5849 | 5000 | 3.6616 |
- | 3.6165 | 1.1698 | 10000 | 3.4256 |
- | 3.5241 | 1.7547 | 15000 | 3.3273 |
- | 3.4341 | 2.3396 | 20000 | 3.2706 |
- | 3.4023 | 2.9245 | 25000 | 3.2331 |
- | 3.3652 | 3.5094 | 30000 | 3.2024 |
- | 3.347 | 4.0943 | 35000 | 3.1826 |
- | 3.3223 | 4.6791 | 40000 | 3.1637 |
- | 3.3107 | 5.2640 | 45000 | 3.1526 |
- | 3.2985 | 5.8489 | 50000 | 3.1370 |
- | 3.2873 | 6.4338 | 55000 | 3.1296 |
- | 3.2758 | 7.0187 | 60000 | 3.1190 |
- | 3.2686 | 7.6036 | 65000 | 3.1105 |
- | 3.2568 | 8.1885 | 70000 | 3.1042 |
- | 3.2546 | 8.7734 | 75000 | 3.0982 |
- | 3.248 | 9.3583 | 80000 | 3.0925 |
- | 3.2431 | 9.9432 | 85000 | 3.0881 |
- | 3.2371 | 10.5281 | 90000 | 3.0820 |
- | 3.2346 | 11.1130 | 95000 | 3.0784 |
- | 3.2273 | 11.6979 | 100000 | 3.0747 |
- | 3.2207 | 12.2828 | 105000 | 3.0701 |
- | 3.2191 | 12.8677 | 110000 | 3.0665 |
- | 3.2148 | 13.4526 | 115000 | 3.0638 |
- | 3.2132 | 14.0374 | 120000 | 3.0594 |
- | 3.2079 | 14.6223 | 125000 | 3.0580 |
- | 3.204 | 15.2072 | 130000 | 3.0549 |
- | 3.2035 | 15.7921 | 135000 | 3.0512 |
- | 3.1999 | 16.3770 | 140000 | 3.0473 |
- | 3.2001 | 16.9619 | 145000 | 3.0462 |
- | 3.1957 | 17.5468 | 150000 | 3.0432 |
- | 3.1948 | 18.1317 | 155000 | 3.0417 |
- | 3.19 | 18.7166 | 160000 | 3.0394 |
- | 3.1873 | 19.3015 | 165000 | 3.0384 |
- | 3.1848 | 19.8864 | 170000 | 3.0367 |
- | 3.1826 | 20.4713 | 175000 | 3.0334 |
- | 3.1839 | 21.0562 | 180000 | 3.0325 |
- | 3.1818 | 21.6411 | 185000 | 3.0314 |
- | 3.1775 | 22.2260 | 190000 | 3.0295 |
- | 3.1747 | 22.8109 | 195000 | 3.0284 |
- | 3.1724 | 23.3957 | 200000 | 3.0273 |
- | 3.1757 | 23.9806 | 205000 | 3.0267 | -->
-
- ### Training Loss
-
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ccee86374057a338e03c1e/WIQvnj-VCCBqvsUlJZ1K_.png)
-
- ### Validation Loss
-
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ccee86374057a338e03c1e/DmTh4sIODlv1wrxXcedxL.png)
-
- ### Framework versions

 - Transformers 4.45.2
 - Pytorch 2.5.0
 - Datasets 3.0.1
- - Tokenizers 0.20.1
 license: mit
 base_model: openai-community/gpt2
 tags:
+ - generated_from_trainer
 model-index:
+ - name: arabic-nano-gpt-v1
+ results: []
 datasets:
+ - wikimedia/wikipedia
 language:
+ - ar
 ---
 
 
 # arabic-nano-gpt-v1

+ This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) on the Arabic [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
+
+ Repository on GitHub: [e-hossam96/arabic-nano-gpt](https://github.com/e-hossam96/arabic-nano-gpt.git)
+
+ The model achieves the following results on the held-out test set:
+
 - Loss: 3.02885

+ ## How to Use
+
+ ```python
+ import torch
+ from transformers import pipeline
+
+ model_ckpt = "e-hossam96/arabic-nano-gpt-v1"
+ device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+
+ lm = pipeline(task="text-generation", model=model_ckpt, device=device)
+
+ prompt = """المحرك النفاث هو محرك ينفث الموائع (الماء أو الهواء) بسرعة فائقة \
+ لينتج قوة دافعة اعتمادا على مبدأ قانون نيوتن الثالث للحركة. \
+ هذا التعريف الواسع للمحركات النفاثة يتضمن أيضا"""
+
+ output = lm(prompt, max_new_tokens=128)
+
+ print(output[0]["generated_text"])
+ ```
 
+ ## Model description
+
+ - Embedding Size: 384
+ - Attention Heads: 4
+ - Attention Layers: 4
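A configuration with these dimensions can be sketched with `transformers` — a hypothetical reconstruction for illustration only; any setting not listed above (vocabulary size, context length, etc.) falls back to GPT-2 defaults and is not confirmed by this card:

```python
from transformers import GPT2Config, GPT2LMHeadModel

# Dimensions from the model card; all other settings are GPT-2 defaults
# and are assumptions, not taken from this card.
config = GPT2Config(
    n_embd=384,  # Embedding Size
    n_head=4,    # Attention Heads
    n_layer=4,   # Attention Layers
)

# Randomly initialized model with the nano-sized architecture.
model = GPT2LMHeadModel(config)
print(f"parameters: {sum(p.numel() for p in model.parameters()):,}")
```

Note that `n_embd` must be divisible by `n_head` (here 384 / 4 = 96 dimensions per head).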
+ ## Training and evaluation data
+
+ The entire Arabic Wikipedia dataset was split into train, validation, and test sets using a 90-5-5 ratio.
+
+ ## Training hyperparameters
+
 The following hyperparameters were used during training:
+
 - learning_rate: 0.0002
 - train_batch_size: 32
 - eval_batch_size: 32

 - lr_scheduler_warmup_ratio: 0.01
 - num_epochs: 24
 
+ ## Training Loss
+
+ ![Training Loss](assets/arabic-nano-gpt-v1-train-loss.png)
+
+ ## Validation Loss
+
+ ![Validation Loss](assets/arabic-nano-gpt-v1-eval-loss.png)
+
+ ## Framework versions
 - Transformers 4.45.2
 - Pytorch 2.5.0
 - Datasets 3.0.1
+ - Tokenizers 0.20.1
assets/arabic-nano-gpt-v1-eval-loss.png ADDED
assets/arabic-nano-gpt-v1-train-loss.png ADDED