alexander-hm committed
Commit 305d300 · verified · 1 Parent(s): 85fe5ca

End of training

Files changed (7)
  1. README.md +115 -0
  2. all_results.json +12 -0
  3. completed +0 -0
  4. eval_results.json +7 -0
  5. metrics.json +1 -0
  6. train_results.json +8 -0
  7. trainer_state.json +0 -0
README.md ADDED
@@ -0,0 +1,115 @@
---
base_model: google/gemma-7b
library_name: peft
license: gemma
tags:
- generated_from_trainer
model-index:
- name: gemma-7b_alpaca-clean_l0.0002_32-32
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# gemma-7b_alpaca-clean_l0.0002_32-32

This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the alpaca-clean dataset (inferred from the run name; the auto-generated card recorded the dataset as unknown).
It achieves the following results on the evaluation set:
- Loss: 2.2862

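Since `library_name` is `peft`, this repository ships a LoRA adapter rather than full model weights. The snippet below is a minimal loading sketch; the adapter repo id is assumed from this card's model name and the committer's namespace, and may need adjusting to wherever the adapter actually lives.

```python
# Minimal sketch: load the base model, then attach this LoRA adapter.
# The adapter id below is an assumption based on the card's model name.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "google/gemma-7b"
adapter_id = "alexander-hm/gemma-7b_alpaca-clean_l0.0002_32-32"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Explain LoRA fine-tuning in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

For deployment, `model.merge_and_unload()` folds the adapter weights into the base model for adapter-free inference.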

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (restated as a `TrainingArguments` sketch after the list):
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 0
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 10000

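The training script itself is not part of this commit, so treat the following as a hedged illustration of how the list above maps onto `transformers.TrainingArguments`, not the exact configuration used:

```python
# Sketch of the listed hyperparameters as TrainingArguments; the model,
# dataset, and LoRA config are assumed to be set up elsewhere.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="gemma-7b_alpaca-clean_l0.0002_32-32",
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=0,
    gradient_accumulation_steps=16,  # effective train batch size: 1 x 16 = 16
    optim="adamw_torch",             # Adam(W) with default betas (0.9, 0.999), eps 1e-8
    lr_scheduler_type="constant",
    warmup_ratio=0.03,               # listed above; a plain constant schedule ignores it
    max_steps=10000,
)
```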

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1466 | 0.0003 | 1 | 2.6494 |
| 2.2743 | 0.0590 | 187 | 1.8818 |
| 1.4643 | 0.1179 | 374 | 1.8742 |
| 1.1845 | 0.1769 | 561 | 1.9893 |
| 2.2425 | 0.2359 | 748 | 1.9409 |
| 1.9557 | 0.2949 | 935 | 1.8788 |
| 1.448 | 0.3538 | 1122 | 1.8471 |
| 1.2879 | 0.4128 | 1309 | 1.8804 |
| 2.2375 | 0.4718 | 1496 | 1.8555 |
| 1.778 | 0.5307 | 1683 | 1.8499 |
| 1.4082 | 0.5897 | 1870 | 1.8627 |
| 1.3452 | 0.6487 | 2057 | 1.8682 |
| 2.8115 | 0.7077 | 2244 | 1.8803 |
| 1.8711 | 0.7666 | 2431 | 1.8475 |
| 1.2821 | 0.8256 | 2618 | 1.8560 |
| 1.2943 | 0.8846 | 2805 | 1.8666 |
| 2.535 | 0.9436 | 2992 | 1.8579 |
| 0.9723 | 1.0025 | 3179 | 1.8711 |
| 2.962 | 1.0615 | 3366 | 1.9227 |
| 1.3686 | 1.1205 | 3553 | 1.9320 |
| 1.1434 | 1.1794 | 3740 | 1.9103 |
| 1.0128 | 1.2384 | 3927 | 1.9004 |
| 2.2098 | 1.2974 | 4114 | 1.9571 |
| 1.0847 | 1.3564 | 4301 | 1.9256 |
| 1.0635 | 1.4153 | 4488 | 1.9156 |
| 1.242 | 1.4743 | 4675 | 1.9359 |
| 2.2656 | 1.5333 | 4862 | 1.9373 |
| 1.4033 | 1.5922 | 5049 | 1.9102 |
| 1.066 | 1.6512 | 5236 | 1.9053 |
| 1.214 | 1.7102 | 5423 | 1.9475 |
| 2.0875 | 1.7692 | 5610 | 1.9373 |
| 1.1555 | 1.8281 | 5797 | 1.9202 |
| 1.0816 | 1.8871 | 5984 | 1.9039 |
| 2.9213 | 1.9461 | 6171 | 1.9437 |
| 0.7327 | 2.0050 | 6358 | 1.9802 |
| 0.9288 | 2.0640 | 6545 | 2.1237 |
| 1.4847 | 2.1230 | 6732 | 2.2272 |
| 0.8673 | 2.1820 | 6919 | 2.0954 |
| 0.8972 | 2.2409 | 7106 | 2.0114 |
| 1.171 | 2.2999 | 7293 | 2.2171 |
| 1.3381 | 2.3589 | 7480 | 2.1423 |
| 1.0032 | 2.4178 | 7667 | 2.0822 |
| 0.8967 | 2.4768 | 7854 | 1.9955 |
| 1.3569 | 2.5358 | 8041 | 2.1730 |
| 1.552 | 2.5948 | 8228 | 2.0954 |
| 0.9403 | 2.6537 | 8415 | 2.0874 |
| 0.8441 | 2.7127 | 8602 | 1.9917 |
| 2.0487 | 2.7717 | 8789 | 2.1445 |
| 1.3355 | 2.8307 | 8976 | 2.0624 |
| 0.9621 | 2.8896 | 9163 | 2.0430 |
| 0.9307 | 2.9486 | 9350 | 2.0186 |
| 0.6211 | 3.0076 | 9537 | 2.2474 |
| 0.6472 | 3.0665 | 9724 | 2.1474 |
| 1.5749 | 3.1255 | 9911 | 2.3950 |

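The rows above are a subset of the full log, which is committed as `trainer_state.json` (too large for the diff viewer to render). A small sketch for recovering the evaluation curve from it:

```python
# Rebuild the eval-loss table from trainer_state.json. The Trainer's
# log_history mixes training logs ("loss") with eval logs ("eval_loss").
import json

with open("trainer_state.json") as f:
    state = json.load(f)

for entry in state["log_history"]:
    if "eval_loss" in entry:
        print(f'epoch {entry["epoch"]:.4f}  step {entry["step"]:>5}  '
              f'eval_loss {entry["eval_loss"]:.4f}')
```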
### Framework versions

- PEFT 0.12.1.dev0
- Transformers 4.45.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
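
The `.dev0` suffixes on PEFT and Transformers indicate source installs, so exact reproduction may require matching commits. A quick runtime check against this list (a sketch, not a pinned requirements file):

```python
# Print installed versions for comparison with the list above.
import datasets, peft, tokenizers, torch, transformers

for mod in (peft, transformers, torch, datasets, tokenizers):
    print(f"{mod.__name__:>12}: {mod.__version__}")
```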
all_results.json ADDED
@@ -0,0 +1,12 @@
{
    "epoch": 3.15357931251971,
    "eval_loss": 2.2862391471862793,
    "eval_runtime": 568.5344,
    "eval_samples_per_second": 1.759,
    "eval_steps_per_second": 1.759,
    "total_flos": 1.237785300679127e+18,
    "train_loss": 1.4761439972877501,
    "train_runtime": 420789.1396,
    "train_samples_per_second": 0.38,
    "train_steps_per_second": 0.024
}
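As a quick sanity check, the throughput figures above are mutually consistent; note that `eval_samples_per_second` equals `eval_steps_per_second` because `eval_batch_size` is 1. Plain arithmetic on the reported values:

```python
# Cross-check the figures reported in all_results.json.
eval_runtime = 568.5344            # seconds
eval_sps = 1.759                   # eval samples per second
print(round(eval_runtime * eval_sps))  # ~1000 evaluation samples

train_runtime = 420789.1396        # seconds
print(train_runtime / 3600)        # ~116.9 hours of training

steps, steps_per_s = 10000, 0.024  # training_steps and the (rounded) step rate
print(steps / steps_per_s / 3600)  # ~115.7 hours, consistent with train_runtime
```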
completed ADDED
File without changes
eval_results.json ADDED
@@ -0,0 +1,7 @@
{
    "epoch": 3.15357931251971,
    "eval_loss": 2.2862391471862793,
    "eval_runtime": 568.5344,
    "eval_samples_per_second": 1.759,
    "eval_steps_per_second": 1.759
}
metrics.json ADDED
@@ -0,0 +1 @@
{"run_name": "google/gemma-7b_alpaca-clean_l0.0002_32,32", "train_runtime": 420789.1396, "train_samples_per_second": 0.38, "train_steps_per_second": 0.024, "total_flos": 1.237785300679127e+18, "train_loss": 1.4761439972877501, "epoch": 3.15357931251971, "eval_loss": 2.2862391471862793, "eval_runtime": 568.5344, "eval_samples_per_second": 1.759, "eval_steps_per_second": 1.759}
train_results.json ADDED
@@ -0,0 +1,8 @@
{
    "epoch": 3.15357931251971,
    "total_flos": 1.237785300679127e+18,
    "train_loss": 1.4761439972877501,
    "train_runtime": 420789.1396,
    "train_samples_per_second": 0.38,
    "train_steps_per_second": 0.024
}
trainer_state.json ADDED
The diff for this file is too large to render. See raw diff