aashish1904 committed
Commit fefc422
1 Parent(s): b889db9

Upload README.md with huggingface_hub

Files changed (1)
README.md (+160, -0)
README.md ADDED
@@ -0,0 +1,160 @@
---
library_name: transformers
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: MagpieLM-8B-SFT-v0.1
  results: []
datasets:
- Magpie-Align/MagpieLM-SFT-Data-v0.1
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/MagpieLM-8B-SFT-v0.1-GGUF
This is a quantized version of [Magpie-Align/MagpieLM-8B-SFT-v0.1](https://huggingface.co/Magpie-Align/MagpieLM-8B-SFT-v0.1), created using llama.cpp.

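As a quick-start sketch (not part of the original card), the GGUF files in this repo can be run with llama.cpp or its Python bindings, [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). The `.gguf` filename below is a placeholder for whichever quantization you download:

```python
# Minimal sketch: run one of the GGUF quantizations with llama-cpp-python.
# The filename is an assumption; substitute the file you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="MagpieLM-8B-SFT-v0.1.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=8192,  # matches the 8192-token sequence length used during fine-tuning
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the Magpie project in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

The same file also works with any other llama.cpp-compatible runtime.
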
# Original Model Card

![Magpie](https://cdn-uploads.huggingface.co/production/uploads/653df1323479e9ebbe3eb6cc/FWWILXrAGNwWr52aghV0S.png)

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://api.wandb.ai/links/uw-nsl/3m10ah3v)

# 🐦 MagpieLM-8B-SFT-v0.1

Project Web: [https://magpie-align.github.io/](https://magpie-align.github.io/)

arXiv Technical Report: [https://arxiv.org/abs/2406.08464](https://arxiv.org/abs/2406.08464)

Code: [https://github.com/magpie-align/magpie](https://github.com/magpie-align/magpie)

## About This Model

*Model full name: Llama3.1-MagpieLM-8B-SFT-v0.1*

This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on the [Magpie-Align/MagpieLM-SFT-Data-v0.1](https://huggingface.co/datasets/Magpie-Align/MagpieLM-SFT-Data-v0.1) dataset.

This is the intermediate checkpoint used for fine-tuning [Magpie-Align/MagpieLM-8B-Chat-v0.1](https://huggingface.co/Magpie-Align/MagpieLM-8B-Chat-v0.1).

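For the original, unquantized checkpoint, a minimal transformers usage sketch (not part of the original card, and assuming the tokenizer ships the Llama-3 chat template, consistent with `chat_template: llama3` in the axolotl config further down) might look like this:

```python
# Minimal sketch: load the non-quantized SFT checkpoint with transformers and
# generate a reply via the tokenizer's chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Magpie-Align/MagpieLM-8B-SFT-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "What is instruction tuning?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
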
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 51
- num_epochs: 2

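As a sanity check (not part of the original card), the reported total train batch size of 128 follows directly from the per-device batch size, the number of GPUs, and gradient accumulation:

```python
# Illustrative arithmetic only: how the effective train batch size of 128 arises.
train_batch_size = 1               # per-device micro batch size
num_devices = 4                    # multi-GPU training across 4 devices
gradient_accumulation_steps = 32   # optimizer step every 32 micro batches per device

total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
print(total_train_batch_size)      # 128, matching the value listed above
```
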
### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9311        | 0.0038 | 1    | 0.9847          |
| 0.561         | 0.2015 | 53   | 0.5765          |
| 0.4843        | 0.4030 | 106  | 0.5039          |
| 0.4608        | 0.6045 | 159  | 0.4814          |
| 0.4454        | 0.8060 | 212  | 0.4678          |
| 0.4403        | 1.0075 | 265  | 0.4596          |
| 0.3965        | 1.1938 | 318  | 0.4574          |
| 0.3952        | 1.3953 | 371  | 0.4554          |
| 0.3962        | 1.5968 | 424  | 0.4547          |
| 0.3948        | 1.7983 | 477  | 0.4544          |

### Framework versions

- Transformers 4.45.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
base_model: meta-llama/Meta-Llama-3.1-8B
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer
chat_template: llama3

load_in_8bit: false
load_in_4bit: false
strict: false
main_process_port: 0

datasets:
  - path: Magpie-Align/MagpieLM-SFT-Data-v0.1
    type: sharegpt
    conversation: llama3

dataset_prepared_path: last_run_prepared
val_set_size: 0.001
output_dir: axolotl_out/MagpieLM-8B-SFT-v0.1

sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true

wandb_project: SynDa
wandb_entity:
wandb_watch:
wandb_name: MagpieLM-8B-SFT-v0.1
wandb_log_model:
hub_model_id: Magpie-Align/MagpieLM-8B-SFT-v0.1

gradient_accumulation_steps: 32
micro_batch_size: 1
num_epochs: 2
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 2e-5

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_ratio: 0.1
evals_per_epoch: 5
eval_table_size:
saves_per_epoch:
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  pad_token: <|end_of_text|>

```
</details><br>
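
For readers unfamiliar with the `type: sharegpt` setting in the config above, a single training record in the ShareGPT convention is usually shaped as below; this is an illustrative sketch with made-up values, not an excerpt from the actual dataset:

```python
# Hypothetical ShareGPT-style record (the layout axolotl's `type: sharegpt` expects).
# The conversation content here is invented purely for illustration.
example = {
    "conversations": [
        {"from": "human", "value": "Explain gradient accumulation in one sentence."},
        {"from": "gpt", "value": "It accumulates gradients over several micro batches before each optimizer step."},
    ]
}
```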