dada22231 committed
Commit
1d8916a
1 Parent(s): bdbb36e

End of training

Files changed (2)
  1. README.md +196 -0
  2. adapter_model.bin +3 -0
README.md ADDED
@@ -0,0 +1,196 @@
---
library_name: peft
license: apache-2.0
base_model: tiiuae/falcon-rw-1b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 69b7fab1-2f9b-4014-a535-30d75d430a19
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tiiuae/falcon-rw-1b
bf16: auto
chat_template: llama3
cosine_min_lr_ratio: 0.1
data_processes: 4
dataset_prepared_path: null
datasets:
- data_files:
  - 071b1ad3c6556dc8_train_data.json
  ds_type: json
  format: custom
  num_proc: 4
  path: /workspace/input_data/071b1ad3c6556dc8_train_data.json
  streaming: true
  type:
    field_input: input
    field_instruction: instruction
    field_output: output
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
device_map:
  lm_head: 3
  model.embed_tokens: 0
  model.layers.0: 0
  model.layers.1: 0
  model.layers.10: 3
  model.layers.11: 3
  model.layers.2: 0
  model.layers.3: 1
  model.layers.4: 1
  model.layers.5: 1
  model.layers.6: 2
  model.layers.7: 2
  model.layers.8: 2
  model.layers.9: 3
  model.norm: 3
do_eval: true
early_stopping_patience: 1
eval_batch_size: 1
eval_sample_packing: false
eval_steps: 25
evaluation_strategy: steps
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 32
gradient_checkpointing: true
group_by_length: true
hub_model_id: dada22231/69b7fab1-2f9b-4014-a535-30d75d430a19
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_grad_norm: 0.3
max_memory:
  0: 60GB
  1: 70GB
  2: 70GB
  3: 70GB
  cpu: 96GB
max_steps: 75
micro_batch_size: 1
mixed_precision: bf16
mlflow_experiment_name: /tmp/071b1ad3c6556dc8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
  adam_beta1: 0.9
  adam_beta2: 0.95
  adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 25
save_strategy: steps
sequence_len: 2048
special_tokens:
  pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
torch_compile: false
torch_dtype: bfloat16
train_on_inputs: false
trust_remote_code: true
use_cache: false
val_set_size: 50
wandb_entity: null
wandb_mode: online
wandb_name: 69b7fab1-2f9b-4014-a535-30d75d430a19
wandb_project: Public_TuningSN
wandb_runid: 69b7fab1-2f9b-4014-a535-30d75d430a19
warmup_ratio: 0.05
weight_decay: 0.01
xformers_attention: null

```

</details><br>
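
For reference, an axolotl config like the one above is typically run through the library's CLI, e.g. `accelerate launch -m axolotl.cli.train config.yaml`; the exact invocation may differ across versions (axolotl 0.4.1 was used here).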

# 69b7fab1-2f9b-4014-a535-30d75d430a19

This model is a fine-tuned version of [tiiuae/falcon-rw-1b](https://huggingface.co/tiiuae/falcon-rw-1b) on a custom JSON instruction dataset (`071b1ad3c6556dc8_train_data.json`; see the axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 1.0455
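
Because this repository holds a PEFT LoRA adapter rather than a full model, the checkpoint is loaded on top of the base model. A minimal inference sketch, assuming the adapter is published at the `hub_model_id` from the config; the prompt string is only illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_ID = "tiiuae/falcon-rw-1b"
ADAPTER_ID = "dada22231/69b7fab1-2f9b-4014-a535-30d75d430a19"  # hub_model_id from the config

# Load the base model in bfloat16 to match torch_dtype in the training config.
model = AutoModelForCausalLM.from_pretrained(
    BASE_ID, torch_dtype=torch.bfloat16, trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(BASE_ID)

# Attach the LoRA weights (adapter_model.bin) on top of the frozen base model.
model = PeftModel.from_pretrained(model, ADAPTER_ID)
model.eval()

# Training used prompts formatted as '{instruction} {input}' (see config above);
# this example prompt is a hypothetical illustration.
prompt = "Summarize the following text: The quick brown fox jumps over the lazy dog."
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```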

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- total_eval_batch_size: 4
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.95) and epsilon=1e-05, set via `optim_args` (overriding the defaults of betas=(0.9, 0.999) and epsilon=1e-08)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 3
- training_steps: 75
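
The derived batch sizes follow directly from the per-device settings: total_train_batch_size = micro_batch_size × gradient_accumulation_steps × num_devices = 1 × 32 × 4 = 128, and total_eval_batch_size = eval_batch_size × num_devices = 1 × 4 = 4.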

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 43.7219       | 0.0011 | 1    | 1.3181          |
| 38.7784       | 0.0265 | 25   | 1.1135          |
| 37.6346       | 0.0531 | 50   | 1.0646          |
| 37.9665       | 0.0796 | 75   | 1.0455          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
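
To recreate this environment, pinning the versions above should suffice, e.g. `pip install peft==0.13.2 transformers==4.46.0 datasets==3.0.1 tokenizers==0.20.1`, plus a CUDA 12.4 build of PyTorch 2.5.0 installed separately (an untested sketch, not a verified install recipe).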
adapter_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c2ae5b4afa386ae47f9cd7718b46aa18d69060be33622bd4d68bf6442f876d94
size 100734154