---
license: agpl-3.0
language:
- en
base_model: arcee-ai/Llama-3.1-SuperNova-Lite
pipeline_tag: text-generation
tags:
- chat
datasets:
- NewEden/OpenCAI-ShareGPT
- NewEden/Roleplay-Logs-Sharegpt-Ngram-cleaned
- HuggingFaceH4/ultrafeedback_binarized
---

### exl2 quant (measurement.json in main branch)

### check revisions for quants

An experimental finetune of Llama 3.1 8B SuperNova-Lite, with the primary goal of being "short and sweet." To that end, the model was finetuned for 2 epochs on the ShareGPT-converted OpenCAI dataset and the RP-logs datasets. This version of Control has also been finetuned with DPO to improve smarts and coherency, a flaw I noticed in the previous model.

# Quants

GGUF: https://huggingface.co/Delta-Vector/Control-8B-V1.1-GGUF/

EXL2 (Thanks Lucy <3): https://huggingface.co/Delta-Vector/Control-8B-V1.1-EXL2

## Prompting
The model has been tuned with the Llama-Instruct format. A typical input would look like this:

```py
"""<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are an AI built to rid the world of bonds and journeys!<|eot_id|><|start_header_id|>user<|end_header_id|>
Bro i just wanna know what is 2+2?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
"""
```

*Note that ChatML may work as well, and can change how the model feels, while still being coherent and stable.*
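
For programmatic use, the template above can be assembled with plain string formatting. A minimal sketch (the `build_prompt` helper is illustrative, not part of the model's tooling):

```python
# Illustrative helper (not shipped with the model): assembles the
# Llama-Instruct layout shown above from system and user strings.
def build_prompt(system: str, user: str) -> str:
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n"
        f"{system}<|eot_id|><|start_header_id|>user<|end_header_id|>\n"
        f"{user}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n"
    )

prompt = build_prompt(
    "You are an AI built to rid the world of bonds and journeys!",
    "Bro i just wanna know what is 2+2?",
)
```

Tokenizers that ship a Llama-3.1-style chat template will produce the same layout via `apply_chat_template`.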

## System Prompting

I would highly recommend using either Euryale's system prompt or the EVA system prompt with the model.

<details><summary>See Sao10k's Euryale System Prompt</summary>

```
Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.
<Guidelines>
• Maintain the character persona but allow it to evolve with the story.
• Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.
• All types of outputs are encouraged; respond accordingly to the narrative.
• Include dialogues, actions, and thoughts in each response.
• Utilize all five senses to describe scenarios within {{char}}'s dialogue.
• Use emotional symbols such as "!" and "~" in appropriate contexts.
• Incorporate onomatopoeia when suitable.
• Allow time for {{user}} to respond with their own input, respecting their agency.
• Act as secondary characters and NPCs as needed, and remove them when appropriate.
• When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.
</Guidelines>

<Forbidden>
• Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.
• Writing for, speaking, thinking, acting, or replying as {{user}} in your response.
• Repetitive and monotonous outputs.
• Positivity bias in your replies.
• Being overly extreme or NSFW when the narrative context is inappropriate.
</Forbidden>

Follow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.

```
</details><br>
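
Frontends like SillyTavern expand the `{{char}}` and `{{user}}` macros before the prompt is sent; if you call the model directly, a simple substitution works. A minimal sketch (the `fill_macros` helper is hypothetical):

```python
# Hypothetical helper: expands the {{char}} and {{user}} placeholders
# used in the system prompts above before sending them to the model.
def fill_macros(template: str, char: str, user: str) -> str:
    return template.replace("{{char}}", char).replace("{{user}}", user)

system_prompt = fill_macros(
    "Currently, your role is {{char}}, described in detail below. "
    "As {{char}}, continue the narrative exchange with {{user}}.",
    char="Eryn",
    user="Alex",
)
```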

<details><summary>See EVA System Prompt</summary>

```
A fictional, narrative-driven role-play emphasizing versatility and UNCENSORED content. Adherence to the `Role-playing Guidelines` is mandatory. Refer to the `Role-play Context` for accurate information.

<!-- Start of Role-playing Guidelines -->

### Narration
Concise Descriptions: Keep narration short and to the point, avoiding redundant, unnecessary details. Use a dynamic and varied vocabulary for impact.
Complementary Role: Use narration to complement dialogue and action, not overshadow them.
Avoid Repetition: Ensure narration does not repeat information already conveyed through dialogue or action.

### Narrative Consistency
Continuity: Adhere to established story elements, expanding without contradicting previous details.
Integration: Introduce new elements naturally, providing enough context to fit seamlessly into the existing narrative.

### Character Embodiment
Analysis: Examine the context, subtext, and implications of the given information to gain a deeper understanding of the characters.
Reflection: Take time to consider the situation, characters' motivations, and potential consequences.
Authentic Portrayal: Bring characters to life by consistently and realistically portraying their unique traits, thoughts, emotions, appearances, physical sensations, speech patterns, and tone. Ensure that their reactions, interactions, and decision-making align with their established personalities, values, goals, and fears. Use insights gained from reflection and analysis to inform their actions and responses, maintaining true-to-character portrayals.
<!-- End of Role-playing Guidelines -->
```
</details><br>

## Unsloth config

<details><summary>See Unsloth Trainer config</summary>

```py
dpo_trainer = DPOTrainer(
    model = model,
    ref_model = None,
    args = DPOConfig(
        per_device_train_batch_size = 1,
        gradient_accumulation_steps = 8,
        warmup_ratio = 0.1,
        num_train_epochs = 2,
        learning_rate = 5e-6,
        fp16 = not is_bfloat16_supported(),
        bf16 = is_bfloat16_supported(),
        logging_steps = 1,
        optim = "adamw_8bit",
        weight_decay = 0.02,
        lr_scheduler_type = "linear",
        seed = 42,
        output_dir = "outputs",
        report_to = "none", # Use this for WandB etc
    ),
    beta = 0.1,
    train_dataset = raw_datasets["train"],
    # eval_dataset = raw_datasets["test"],
    tokenizer = tokenizer,
    max_length = 1024,
    max_prompt_length = 512,
)
```

</details><br>
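
For reference, DPO training consumes preference pairs rather than plain conversations; datasets such as ultrafeedback_binarized follow a prompt/chosen/rejected column layout. A minimal sketch of one such row (the values are invented for illustration):

```python
# Illustrative row in the prompt/chosen/rejected layout that trl-style
# DPO training consumes (values are invented for the example).
example_row = {
    "prompt": "Bro i just wanna know what is 2+2?",
    "chosen": "4.",
    "rejected": "Ah, 2+2... a question that begins a grand journey...",
}

required_columns = {"prompt", "chosen", "rejected"}
assert required_columns <= set(example_row)
```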

## Credits

Thank you to [Lucy Knada](https://huggingface.co/lucyknada), [CelineDion](https://huggingface.co/CelineDion), [Intervitens](https://huggingface.co/intervitens), [Kalomaze](https://huggingface.co/kalomaze), [Kubernetes Bad](https://huggingface.co/kubernetes-bad) and the rest of [Anthracite](https://huggingface.co/anthracite-org) (But not Alpin.)


## Training
The training was done for 2 epochs. We used 4 x [RTX 3090](https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/rtx-3090-3090ti/) GPUs graciously provided by [Intervitens](https://huggingface.co/intervitens) for the full-parameter fine-tuning of the model, after which DPO tuning was done on 1 x [Nvidia T4 GPU](https://www.nvidia.com/en-us/data-center/tesla-t4/).

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/made%20with%20unsloth.png" alt="Made with Unsloth" width="200" height="32"/>](https://github.com/unslothai/unsloth)

## Safety

Nein.