Update README.md
README.md
CHANGED
**Previous version (the auto-generated model card):**

---
# ...
tags:
# ...
results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
base_model: /workspace/data/gemma-2-9b-chatml
model_type: AutoModelForCausalLM
# ...
strict: false

datasets:
  - path: anthracite-
    type: sharegpt
    conversation: chatml
  - path: NewEden/Claude-Instruct-5K
# ...
fsdp_config:
special_tokens:
  pad_token: <pad>
```

</details><br>

This model was trained from scratch on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4

### Framework versions

- Datasets 2.21.0
- Tokenizers 0.19.1
**Updated version:**

---
license: gemma
language:
- en
tags:
- chat
pipeline_tag: text-generation
library_name: transformers
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/658a46cbfb9c2bdfae75b3a6/vxYDYerLy2vD8n05nL2WU.png)

This is a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus.

This model is fine-tuned on top of [gemma 2 9b (chatML'ified)](https://huggingface.co/IntervitensInc/gemma-2-9b-chatml).
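The card sets `library_name: transformers` and `pipeline_tag: text-generation`, so the weights should load with the standard Transformers API. Below is a minimal loading sketch; `repo_id` is a placeholder (this card does not state the published repository id), so substitute the actual repo name or a local path to the weights.

```py
# Minimal loading sketch. `repo_id` is a placeholder; point it at this model's
# actual Hugging Face repository or a local checkout of the weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "path/to/this-model"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto", device_map="auto")

# Hand-written ChatML prompt, matching the format shown in the Prompting section below.
prompt = "<|im_start|>user\nHi there!<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```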
## Prompting
A typical input would look like this:

```
<|im_start|>system
system prompt<|im_end|>
<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
```
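If the tokenizer ships a ChatML chat template (the base model is described as chatML'ified, so this is likely, though not confirmed by this card), the prompt above can be produced with `apply_chat_template` instead of being assembled by hand. A sketch, reusing the placeholder `repo_id` from the loading example:

```py
# Sketch: build the ChatML prompt shown above via the tokenizer's chat template.
# Assumes the tokenizer defines a ChatML chat_template; repo_id is a placeholder.
from transformers import AutoTokenizer

repo_id = "path/to/this-model"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(repo_id)

messages = [
    {"role": "system", "content": "system prompt"},
    {"role": "user", "content": "Hi there!"},
    {"role": "assistant", "content": "Nice to meet you!"},
    {"role": "user", "content": "Can I ask a question?"},
]

# add_generation_prompt=True appends the trailing "<|im_start|>assistant\n" turn.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```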
## SillyTavern templates

Below are Instruct and Context templates for use within SillyTavern.

<details><summary>context template</summary>

```yaml
{
  "story_string": "<|im_start|>system\n{{#if system}}{{system}}\n{{/if}}{{#if wiBefore}}{{wiBefore}}\n{{/if}}{{#if description}}{{description}}\n{{/if}}{{#if personality}}{{char}}'s personality: {{personality}}\n{{/if}}{{#if scenario}}Scenario: {{scenario}}\n{{/if}}{{#if wiAfter}}{{wiAfter}}\n{{/if}}{{#if persona}}{{persona}}\n{{/if}}{{trim}}<|im_end|>\n",
  "example_separator": "",
  "chat_start": "",
  "use_stop_strings": false,
  "allow_jailbreak": false,
  "always_force_name2": true,
  "trim_sentences": false,
  "include_newline": false,
  "single_line": false,
  "name": "Magnum ChatML"
}
```

</details><br>
<details><summary>instruct template</summary>

```yaml
{
  "system_prompt": "Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.\n\n<Guidelines>\n• Maintain the character persona but allow it to evolve with the story.\n• Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.\n• All types of outputs are encouraged; respond accordingly to the narrative.\n• Include dialogues, actions, and thoughts in each response.\n• Utilize all five senses to describe scenarios within {{char}}'s dialogue.\n• Use emotional symbols such as \"!\" and \"~\" in appropriate contexts.\n• Incorporate onomatopoeia when suitable.\n• Allow time for {{user}} to respond with their own input, respecting their agency.\n• Act as secondary characters and NPCs as needed, and remove them when appropriate.\n• When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.\n</Guidelines>\n\n<Forbidden>\n• Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.\n• Writing for, speaking, thinking, acting, or replying as {{user}} in your response.\n• Repetitive and monotonous outputs.\n• Positivity bias in your replies.\n• Being overly extreme or NSFW when the narrative context is inappropriate.\n</Forbidden>\n\nFollow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.",
  "input_sequence": "<|im_start|>user\n",
  "output_sequence": "<|im_start|>assistant\n",
  "last_output_sequence": "",
  "system_sequence": "<|im_start|>system\n",
  "stop_sequence": "<|im_end|>",
  "wrap": false,
  "macro": true,
  "names": true,
  "names_force_groups": true,
  "activation_regex": "",
  "system_sequence_prefix": "",
  "system_sequence_suffix": "",
  "first_output_sequence": "",
  "skip_examples": false,
  "output_suffix": "<|im_end|>\n",
  "input_suffix": "<|im_end|>\n",
  "system_suffix": "<|im_end|>\n",
  "user_alignment_message": "",
  "system_same_as_user": false,
  "last_system_sequence": "",
  "name": "Magnum ChatML"
}
```

</details><br>
## Axolotl config

<details><summary>See axolotl config</summary>

```yaml
base_model: /workspace/data/gemma-2-9b-chatml
model_type: AutoModelForCausalLM
# ...
strict: false

datasets:
  - path: anthracite-org/c2_logs_16k_llama_v1.1
    type: sharegpt
    conversation: chatml
  - path: NewEden/Claude-Instruct-5K
# ...
fsdp_config:
special_tokens:
  pad_token: <pad>
```

</details><br>
## Credits
We'd like to thank Recursal / Featherless for sponsoring the compute for this train. Featherless has been hosting our Magnum models since the first 72B, giving thousands of people access to our models and helping us grow.

We would also like to thank all members of Anthracite who made this finetune possible.
## Datasets
- [anthracite-core/c2_logs_16k_llama_v1.1](https://huggingface.co/datasets/anthracite-core/c2_logs_16k_llama_v1.1)
- [NewEden/Claude-Instruct-5K](https://huggingface.co/datasets/NewEden/Claude-Instruct-5K)
- [anthracite-org/kalo-opus-instruct-22k-no-refusal](https://huggingface.co/datasets/anthracite-org/kalo-opus-instruct-22k-no-refusal)
- [Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned](https://huggingface.co/datasets/Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned)
- [lodrick-the-lafted/kalo-opus-instruct-3k-filtered](https://huggingface.co/datasets/lodrick-the-lafted/kalo-opus-instruct-3k-filtered)
- [anthracite-org/nopm_claude_writing_fixed](https://huggingface.co/datasets/anthracite-org/nopm_claude_writing_fixed)
- [Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned](https://huggingface.co/datasets/Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned)
- [anthracite-org/kalo_opus_misc_240827](https://huggingface.co/datasets/anthracite-org/kalo_opus_misc_240827)
- [anthracite-org/kalo_misc_part2](https://huggingface.co/datasets/anthracite-org/kalo_misc_part2)
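The entries above are Hugging Face dataset repositories, so they can be pulled with the `datasets` library for inspection. A small sketch; the `train` split name is an assumption and may differ per repository.

```py
# Sketch: download and inspect one of the listed datasets with the `datasets` library.
# The split name "train" is an assumption; check each repo for its actual splits.
from datasets import load_dataset

ds = load_dataset("anthracite-org/kalo-opus-instruct-22k-no-refusal", split="train")
print(ds)     # number of rows and column names
print(ds[0])  # first conversation record
```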
## Training
Training ran for 2 epochs. We used 8x [H100](https://www.nvidia.com/en-us/data-center/h100/) GPUs, graciously provided by [Recursal AI](https://recursal.ai/) / [Featherless AI](https://featherless.ai/), for the full-parameter fine-tuning of the model.

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
## Safety
...