Delta-Vector committed · Commit 75c07a1 · verified · 1 Parent(s): 21cc7d5

Update README.md

Files changed (1)
  1. README.md +193 -20
README.md CHANGED
@@ -1,40 +1,213 @@
  ---
- base_model:
- - NewEden/MistralAI-Nemo-Instruct-ChatML
- - NewEden/daring-mango-r1
  library_name: transformers
  tags:
- - mergekit
- - merge
-
  ---
  ### exl2 quant (measurement.json in main branch)
  ---
  ### check revisions for quants
  ---

- # mag-se

- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

- ## Merge Details
- ### Merge Method

- This model was merged using the passthrough merge method using [NewEden/MistralAI-Nemo-Instruct-ChatML](https://huggingface.co/NewEden/MistralAI-Nemo-Instruct-ChatML) + [NewEden/daring-mango-r1](https://huggingface.co/NewEden/daring-mango-r1) as a base.

- ### Models Merged

- The following models were included in the merge:


- ### Configuration

- The following YAML configuration was used to produce this model:

  ```yaml
- base_model: NewEden/MistralAI-Nemo-Instruct-ChatML+NewEden/daring-mango-r1
- dtype: bfloat16
- merge_method: passthrough
- models:
- - model: NewEden/MistralAI-Nemo-Instruct-ChatML+NewEden/daring-mango-r1
  ```
  ---
+ language:
+ - en
  library_name: transformers
  tags:
+ - chat
+ pipeline_tag: text-generation
+ datasets:
+ - AquaV/c2-sharegpt-advanced-prefills-filtered
+ - AquaV/c1-sharegpt-advanced-prefills-filtered
+ - AquaV/rainy-sharegpt-advanced-prefills-filtered
+ - anthracite-core/Gryphe-Opus-Charcard-Roleplay
+ - anthracite-org/kalo-opus-instruct-22k-no-refusal
+ - lodrick-the-lafted/kalo-opus-instruct-3k-filtered
+ - anthracite-org/nopm_claude_writing_fixed
+ - anthracite-org/kalo_opus_misc_240827
+ - anthracite-org/kalo_misc_part2
+ - NewEden/Claude-Instruct-2.7K
+ - NewEden/Claude-Instruct-5K
  ---
+
  ### exl2 quant (measurement.json in main branch)
  ---
  ### check revisions for quants
  ---

+ <img src="https://cdn-uploads.huggingface.co/production/uploads/66c26b6fb01b19d8c3c2467b/nqMkoIsmScaTFHCFirGsc.png" width="500px" />

+ This is a model designed to replicate the prose quality of the Claude 3 series of models, specifically Sonnet and Opus, made with a prototype Magnum V5 datamix.

+ This model is fine-tuned on top of [Mistral-Nemo-Instruct (ChatML'ified)](https://huggingface.co/NewEden/MistralAI-Nemo-Instruct-ChatML).
+ ## Quants

+ EXL2: https://huggingface.co/Delta-Vector/Rei-12B-EXL2

+ GGUF: https://huggingface.co/Delta-Vector/Rei-12B-gguf/

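+ For a quick local test, a GGUF quant can be loaded with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). A minimal sketch; the file name below is hypothetical, so substitute whichever quant you download from the GGUF repo:
+
+ ```py
+ from llama_cpp import Llama
+
+ # Hypothetical file name; use the quant actually downloaded from Delta-Vector/Rei-12B-gguf.
+ llm = Llama(
+     model_path="Rei-12B-Q6_K.gguf",
+     n_ctx=16384,           # matches the sequence_len used in training
+     chat_format="chatml",  # the prompt format described below
+ )
+
+ out = llm.create_chat_completion(
+     messages=[{"role": "user", "content": "Hi there!"}],
+     max_tokens=256,
+ )
+ print(out["choices"][0]["message"]["content"])
+ ```
+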
+ ## Prompting
+ A typical input would look like this:
+
+ ```py
+ """<|im_start|>user
+ Hi there!<|im_end|>
+ <|im_start|>assistant
+ Nice to meet you!<|im_end|>
+ <|im_start|>user
+ Can I ask a question?<|im_end|>
+ <|im_start|>assistant
+ """
+ ```

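+ A minimal sketch of producing that format via the tokenizer's chat template with `transformers` (the repo id is an assumption based on the quant links above; swap in the model you actually use):
+
+ ```py
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "Delta-Vector/Rei-12B"  # assumed id, per the quant repos above
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
+
+ messages = [
+     {"role": "user", "content": "Hi there!"},
+     {"role": "assistant", "content": "Nice to meet you!"},
+     {"role": "user", "content": "Can I ask a question?"},
+ ]
+
+ # Renders the ChatML turns shown above and leaves an open assistant turn.
+ input_ids = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, return_tensors="pt"
+ ).to(model.device)
+
+ output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8)
+ print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
+ ```
+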
+ I would highly recommend using Sao10k's Euryale system prompt with the model.
+
+ <details><summary>See Sao10k's Euryale System Prompt</summary>
+
+ ```
+ Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.
+ <Guidelines>
+ • Maintain the character persona but allow it to evolve with the story.
+ • Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.
+ • All types of outputs are encouraged; respond accordingly to the narrative.
+ • Include dialogues, actions, and thoughts in each response.
+ • Utilize all five senses to describe scenarios within {{char}}'s dialogue.
+ • Use emotional symbols such as "!" and "~" in appropriate contexts.
+ • Incorporate onomatopoeia when suitable.
+ • Allow time for {{user}} to respond with their own input, respecting their agency.
+ • Act as secondary characters and NPCs as needed, and remove them when appropriate.
+ • When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.
+ </Guidelines>

+ <Forbidden>
+ • Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.
+ • Writing for, speaking, thinking, acting, or replying as {{user}} in your response.
+ • Repetitive and monotonous outputs.
+ • Positivity bias in your replies.
+ • Being overly extreme or NSFW when the narrative context is inappropriate.
+ </Forbidden>
+ ```

+ </details><br>
+
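+ In ChatML terms, that prompt goes in as the system turn. A minimal sketch (here `EURYALE_PROMPT` is a placeholder for the full text above, with {{char}} and {{user}} filled in by your frontend or by hand):
+
+ ```py
+ # EURYALE_PROMPT is a placeholder for the system prompt shown above,
+ # with {{char}} and {{user}} already substituted.
+ messages = [
+     {"role": "system", "content": EURYALE_PROMPT},
+     {"role": "user", "content": "Hi there!"},
+ ]
+ ```
+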
+ ## Axolotl config
+
+ <details><summary>See axolotl config</summary>

  ```yaml
+ ## model
+ base_model: NewEden_nemo-chatml
+ model_type: AutoModelForCausalLM
+ tokenizer_type: AutoTokenizer
+
+ ## qlora COPE
+ load_in_8bit: false
+ load_in_4bit: false
+ strict: false
+
+ ## data
+ datasets:
+ - path: AquaV/c2-sharegpt-advanced-prefills-filtered
+   type: sharegpt
+ - path: AquaV/c1-sharegpt-advanced-prefills-filtered
+   type: sharegpt
+ - path: AquaV/rainy-sharegpt-advanced-prefills-filtered
+   type: sharegpt
+ - path: anthracite-core/Gryphe-Opus-Charcard-Roleplay
+   type: sharegpt
+ - path: anthracite-org/kalo-opus-instruct-22k-no-refusal
+   type: sharegpt
+ - path: lodrick-the-lafted/kalo-opus-instruct-3k-filtered
+   type: sharegpt
+ - path: anthracite-org/nopm_claude_writing_fixed
+   type: sharegpt
+ - path: anthracite-org/kalo_opus_misc_240827
+   type: sharegpt
+ - path: anthracite-org/kalo_misc_part2
+   type: sharegpt
+ - path: NewEden/Claude-Instruct-2.7K
+   type: sharegpt
+ - path: NewEden/Claude-Instruct-5K
+   type: sharegpt
+ shuffle_merged_datasets: true
+ dataset_prepared_path: dataset_prepared
+ val_set_size: 0.02
+ output_dir: 12b-out-rslora-SE
+
+ ## LIGGER
+ plugins:
+ - axolotl.integrations.liger.LigerPlugin
+ liger_rope: true
+ liger_rms_norm: true
+ liger_layer_norm: true
+ liger_glu_activation: true
+ liger_fused_linear_cross_entropy: true
+
+ ## CTX settings
+ sequence_len: 16384
+ sample_packing: true
+ eval_sample_packing: true
+ pad_to_sequence_len: true
+
+ ## Lora
+ adapter: lora
+ lora_model_dir:
+ lora_r: 128
+ lora_alpha: 16
+ lora_dropout: 0.05
+ lora_target_linear: true
+ lora_fan_in_fan_out:
+ peft_use_rslora: true
+ lora_modules_to_save:
+ - embed_tokens
+ - lm_head
+
+ ## WandB
+ wandb_project: rei
+ wandb_entity:
+ wandb_watch:
+ wandb_name: daring-mango
+ wandb_log_model:
+
+ ## evals
+ evals_per_epoch: 4
+ eval_table_size:
+ eval_max_new_tokens: 128
+
+ ## hoe params
+ gradient_accumulation_steps: 4
+ micro_batch_size: 1
+ num_epochs: 2
+ optimizer: paged_ademamix_8bit
+ # optimizer: paged_adamw_8bit
+ lr_scheduler: cosine
+ learning_rate: 2.83e-5
+
+ train_on_inputs: false
+ group_by_length: false
+ bf16: auto
+ fp16:
+ tf32: false
+
+ gradient_checkpointing: unsloth
+ early_stopping_patience:
+ resume_from_checkpoint:
+ local_rank:
+ logging_steps: 1
+ xformers_attention:
+ flash_attention: true
+ s2_attention:
+
+ warmup_steps: 40
+ saves_per_epoch: 2
+ debug:
+ ## for ademamix
+ deepspeed: /workspace/axolotl/deepspeed_configs/zero3_bf16_cpuoffload_params.json
+ ## for adamw
+ # deepspeed: ./deepspeed_configs/zero3_bf16.json
+ weight_decay: 0.01
+ fsdp:
+ fsdp_config:
+ special_tokens:
+   pad_token: <pad>
+
  ```
+ </details><br>
+
+
+ ## Training
+ The training was done for 2 epochs. We used 4x [RTX 3090](https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/rtx-3090-3090ti/) GPUs graciously provided by @intervitens for the fine-tuning of the model.
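+
+ For reference, with `micro_batch_size: 1`, `gradient_accumulation_steps: 4`, and 4 GPUs, the effective batch size works out to 1 × 4 × 4 = 16 packed sequences of up to 16,384 tokens per optimizer step (assuming all four cards ran data-parallel under the ZeRO-3 DeepSpeed config above).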
+
+ [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
+
+ ## Safety
+
+ But why?