neginashz committed on
Commit ec4f42d
1 Parent(s): 84b4d10

End of training

README.md ADDED
@@ -0,0 +1,138 @@
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- axolotl
- generated_from_trainer
datasets:
- medalpaca/medical_meadow_medqa
model-index:
- name: qwen2-ins-full-fsdp
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.6.0`
```yaml
base_model: Qwen/Qwen2.5-7B-Instruct
trust_remote_code: true

load_in_8bit:
load_in_4bit:
strict: false

datasets:
  - path: medalpaca/medical_meadow_medqa
    type: alpaca
dataset_prepared_path:
val_set_size: 0.2
output_dir: ./fulloutputs/out

sequence_len: 8192
sample_packing: true
eval_sample_packing: true
pad_to_sequence_len: true

wandb_project: full-ft-qwen
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 1
micro_batch_size: 1
num_epochs: 3
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 0.00002

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
resume_from_checkpoint:
local_rank:
logging_steps: 10
xformers_attention:
flash_attention: true

warmup_steps:
eval_steps: 100
save_steps: 100
debug:
deepspeed: deepspeed_configs/zero2.json
weight_decay: 0.05
fsdp:
fsdp_config:
special_tokens:

hub_model_id: neginashz/qwen2-ins-full-fsdp
early_stopping_patience: 3

```

</details><br>
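
For reference, a run like this is launched by pointing axolotl's training entry point at the config above. A minimal sketch, assuming axolotl 0.6.0 with its `accelerate launch -m axolotl.cli.train` invocation and the config saved as `qwen2-medqa.yaml` (a hypothetical filename):

```python
# Hypothetical launch wrapper: shells out to accelerate/axolotl from Python.
# Assumes axolotl 0.6.0 is installed and the YAML above is saved locally.
import subprocess

subprocess.run(
    ["accelerate", "launch", "-m", "axolotl.cli.train", "qwen2-medqa.yaml"],
    check=True,  # raise CalledProcessError if training exits non-zero
)
```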

# qwen2-ins-full-fsdp

This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the medalpaca/medical_meadow_medqa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1810
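
A minimal inference sketch, assuming the standard transformers API and Qwen2.5's chat template (the question shown is a placeholder):

```python
# Minimal inference sketch; assumes transformers >= 4.47 as reported below.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "neginashz/qwen2-ins-full-fsdp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Placeholder question in the MedQA multiple-choice style.
messages = [{"role": "user", "content": "Which vitamin deficiency causes scurvy?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```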

## Model description

A full-parameter fine-tune of Qwen2.5-7B-Instruct for medical question answering, trained with Axolotl using DeepSpeed ZeRO-2 across 4 GPUs, bf16 precision, flash attention, and sample packing at a sequence length of 8192.

## Intended uses & limitations

The model targets MedQA-style medical multiple-choice question answering. It has only been evaluated by validation loss on a held-out split of the training data and has not been assessed for clinical use.

## Training and evaluation data

Training used the [medalpaca/medical_meadow_medqa](https://huggingface.co/datasets/medalpaca/medical_meadow_medqa) dataset in alpaca format, with 20% of the examples held out as the evaluation set (`val_set_size: 0.2`).
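
To inspect what the model was trained on, a short sketch assuming the `datasets` library and the alpaca-style `instruction`/`input`/`output` columns implied by the config's `type: alpaca` setting:

```python
# Sketch: peek at the alpaca-format training data (column names assumed
# from the config's `type: alpaca` mapping).
from datasets import load_dataset

ds = load_dataset("medalpaca/medical_meadow_medqa", split="train")
example = ds[0]
print(example["instruction"])  # task framing
print(example["input"])        # the question plus answer options
print(example["output"])       # the expected answer
```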

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 4
- total_eval_batch_size: 4
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 6
- num_epochs: 3
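
The total train batch size of 4 follows directly from micro_batch_size × gradient_accumulation_steps × num_devices = 1 × 1 × 4, and the total eval batch size is derived the same way.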

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0548        | 1.3889 | 100  | 0.1461          |
| 0.0061        | 2.7778 | 200  | 0.1810          |
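
The evaluation loss reported at the top of this card (0.1810) is the value at the final logged evaluation, step 200, where validation loss had risen from its step-100 value even as training loss kept falling.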

### Framework versions

- Transformers 4.47.0
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.21.0
model-00001-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:dad47b4c35ceb4273fe0665a67f153b7880f5f4848a384cc208524de55e78fc2
+ oid sha256:7c166b0b3938d20350db0faefed20dc5cc1684c087dc1165ae351c3fb5086e80
  size 4877660776
model-00002-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:860d436fa8a5ccc5f902e26400daafeba3dac6ee7fbefeae67799984ed8c249d
+ oid sha256:5d2233b21014fa0bf419804e9c4e80b7e893998db8e3cfe6b715dfb0b8d171df
  size 4932751008
model-00003-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:470afe0aa9c41982667f078b9374bdb3767e1b80a7adc45d0306ebf493fd8e34
+ oid sha256:4ef2051b1a2b140efdf6f5fc4b3396c6348979ab8b9dd20499eae3b81a8c9b50
  size 4330865200
model-00004-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:0ad0af33a8cf7745eda7a6307de49978d7d080d2b02c168dac57e3d92ec29467
+ oid sha256:0bc90ee92fffd88e2b9a112ce123c4785f5c2955ca559cd24af56b614bd261b7
  size 1089994880
pytorch_model-00001-of-00002.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:3fdbfd213762e5522e532223b3431fd47057d929af0d8ab771daf974217bfd79
+ oid sha256:5b28b821e81ce509df9440908b452ab132ff94022287e36914e27a71dccf04a6
  size 15231010809
pytorch_model-00002-of-00002.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:113bf5cec24f541796d827ad87e9e14d0cc8d19924c61575f441f13f1e5e63dc
+ oid sha256:9d9fdea45c4f420f5b9cd19bacd564ccdc9890c1847b433596a2f2fd723ba85a
  size 269317