lesso committed
Commit 421f126
1 parent: 8940205

End of training

README.md CHANGED
@@ -105,7 +105,7 @@ xformers_attention: null
 
  This model is a fine-tuned version of [tokyotech-llm/Llama-3-Swallow-8B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-v0.1) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 2.1792
+ - Loss: 2.1841
 
  ## Model description
 
@@ -128,11 +128,8 @@ The following hyperparameters were used during training:
  - train_batch_size: 1
  - eval_batch_size: 1
  - seed: 42
- - distributed_type: multi-GPU
- - num_devices: 2
  - gradient_accumulation_steps: 4
- - total_train_batch_size: 8
- - total_eval_batch_size: 2
+ - total_train_batch_size: 4
  - optimizer: Use OptimizerNames.ADAMW_HF with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
  - lr_scheduler_type: cosine
  - lr_scheduler_warmup_steps: 10
@@ -143,10 +140,10 @@ The following hyperparameters were used during training:
 
  | Training Loss | Epoch  | Step | Validation Loss |
  |:-------------:|:------:|:----:|:---------------:|
- | 2.3204        | 0.0054 | 1    | 2.2009          |
- | 2.2241        | 0.0162 | 3    | 2.1992          |
- | 2.4093        | 0.0324 | 6    | 2.1916          |
- | 2.2467        | 0.0486 | 9    | 2.1792          |
+ | 2.4235        | 0.0027 | 1    | 2.2008          |
+ | 2.6546        | 0.0081 | 3    | 2.1996          |
+ | 2.3495        | 0.0162 | 6    | 2.1935          |
+ | 2.3295        | 0.0243 | 9    | 2.1841          |
 
 
  ### Framework versions
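The three README hunks are consistent with a move from a two-GPU run to a single-device run: `distributed_type: multi-GPU` and `num_devices: 2` are removed, the effective batch drops from 8 to 4, and the epoch fraction at each step exactly halves (e.g. step 6 goes from epoch 0.0324 to 0.0162). A minimal sketch of the batch arithmetic behind the reported `total_train_batch_size` values (the helper function is illustrative, not a Trainer API):

```python
# Illustrative sketch (not a real Trainer function): how the
# "total_train_batch_size" values in the README diff come about.
def total_train_batch_size(per_device: int, grad_accum: int, num_devices: int) -> int:
    # effective examples consumed per optimizer step
    return per_device * grad_accum * num_devices

assert total_train_batch_size(1, 4, 2) == 8  # old run: two GPUs
assert total_train_batch_size(1, 4, 1) == 4  # new run: single device
```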
adapter_config.json CHANGED
@@ -20,13 +20,13 @@
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
-     "v_proj",
-     "up_proj",
-     "o_proj",
-     "down_proj",
-     "q_proj",
      "gate_proj",
-     "k_proj"
+     "q_proj",
+     "k_proj",
+     "down_proj",
+     "up_proj",
+     "v_proj",
+     "o_proj"
  ],
  "task_type": "CAUSAL_LM",
  "use_dora": false,
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:6dd44e7c4608b202bc0f01c2f4a3f50f7ecd28e7fd03c4c0097240dbaadfaf2b
+ oid sha256:74735c2e63d406e7970a6509b86dea08dac56b622c28910801aff96f76320d50
  size 84047370
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:dd8d602b8c8217d2bad01c439cf265d8636e230eef9088012f7f0abb56d28cf2
+ oid sha256:b0304deb94d880a7178e25c4a9e95080cad45ede8dec464157f33bd332f16447
  size 83945296
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:46bccd6964f4291101ddc4731a06bfcf4544fcbb608a5a0128bc2068c8364f7b
+ oid sha256:d5b80978b6cdcc9339ecc7931db757328a1ca9deca5507d6af70e88c187855a9
  size 6776
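The three binary artifacts are stored as Git LFS pointer files: each diff swaps only the `oid sha256:` line while `size` is unchanged, so the payload bytes differ but their length does not. A hedged sketch of verifying a downloaded file against its pointer's oid (the local path is illustrative):

```python
import hashlib

def sha256_of(path: str) -> str:
    # stream the file so large LFS payloads need not fit in memory
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# new oid for adapter_model.bin, taken from the diff above
expected = "74735c2e63d406e7970a6509b86dea08dac56b622c28910801aff96f76320d50"
assert sha256_of("adapter_model.bin") == expected  # illustrative local path
```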