Meta Llama 3 Instruct Format

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are [character]. You have a personality of [personality description]. [Describe scenario]<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
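If you build prompts programmatically, the same format can usually be produced with the tokenizer's built-in chat template instead of assembling the special tokens by hand. A minimal sketch using Hugging Face `transformers` follows; the model ID is a placeholder, so substitute the actual repository name you are using.

```python
from transformers import AutoTokenizer

# Placeholder model ID -- substitute the actual RPMax repository name.
tokenizer = AutoTokenizer.from_pretrained("ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.3")

messages = [
    {"role": "system", "content": "You are [character]. You have a personality of [personality description]. [Describe scenario]"},
    {"role": "user", "content": "{{ user_message_1 }}"},
]

# Renders the Llama 3 Instruct special tokens (<|start_header_id|>, <|eot_id|>, ...)
# and appends the assistant header so the model continues in character.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```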
# RPMax: Reduced repetition and higher creativity model

The goal of RPMax is to reduce repetition and increase the model's ability to write creatively in whatever situation it is presented with. In practice, this means a model that produces very different responses across situations instead of falling back on predictable tropes.
# What is repetition and creativity?

First of all, creativity here means the variety of output the model is capable of producing; it should not be confused with nice prose. When a model writes pleasantly, the way an author would in a novel, that is not creative writing, it is simply a pleasant prose style. A model that writes nicely is therefore not necessarily a creative model.

Repetition and creativity are essentially intertwined: if a model is repetitive, it can also be called uncreative, since it cannot write anything new and only repeats responses similar to ones it has produced before. There are actually two very different forms of repetition.

**In-context repetition:** When people say a model is repetitive, they usually mean that it repeats the same phrases within a single conversation. An example is a model that writes that a character "flicks her hair and..." and then starts prepending "flicks her hair and..." to every other action that character takes.

This can make the model boring, but even in human writing this kind of repetition is sometimes intentional, used to subtly make a point or to showcase a character's traits in certain scenarios. So this type of repetition is not always bad, and completely discouraging it does not always improve a model's writing ability.

**Cross-context repetition:** The second, and arguably worse, type of repetition is a model's tendency to repeat the same phrases or tropes in very different situations. An example is a model that keeps using the infamous "shivers down my spine" phrase in wildly different conversations where it does not fit.

This type of repetition is ALWAYS bad, as it is a sign that the model has over-fitted to the style of "creative writing" it saw most often in its training data. A model's tendency toward cross-context repetition also usually shows up in the way it picks the same names when writing stories, such as the infamous "Elara" and "Whispering Woods".

With RPMax v1 the main goal is to create a highly creative model by reducing cross-context repetition, as that is the type of repetition that follows you across different conversations. This type of repetition can also be combated by making sure the dataset does not repeat the same situations or characters across different example entries.
# Dataset Curation

RPMax's success comes from the training method and the training dataset created for these models' fine-tuning. The dataset contains as many open-source creative writing and RP datasets as could be found (mostly on Hugging Face), curated to weed out purely synthetic generations, since those tend to dumb the model down and teach it GPT-isms (slop) rather than help.

Llama 3.1 8B is then used to build a database of the characters and situations portrayed in these datasets, which is used to de-dupe them so that there is only a single entry for any given character or situation.
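
The exact de-duplication pipeline is not published, but the idea can be sketched as follows: have the auxiliary model tag each entry with the character and situation it portrays, then keep only the first entry per tag. The `summarize_entry` callable below is a stand-in for whatever prompt is used with Llama 3.1 8B, and the normalization is illustrative only.

```python
def dedupe_by_character_and_situation(entries, summarize_entry):
    """Keep only one example per (character, situation) pair.

    `entries` is a list of raw conversation examples; `summarize_entry` stands in
    for a call to an auxiliary model (e.g. Llama 3.1 8B) that returns a short
    (character, situation) description for an example.
    """
    seen = set()
    kept = []
    for entry in entries:
        character, situation = summarize_entry(entry)
        # Normalize so trivial wording differences don't create "new" pairs.
        key = (character.strip().lower(), situation.strip().lower())
        if key in seen:
            continue  # a similar character/situation is already in the dataset
        seen.add(key)
        kept.append(entry)
    return kept
```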
# The Golden Rule of Fine-Tuning

Unlike the initial pre-training stage, where throwing more data at the model generally makes it better, the golden rule of fine-tuning is quality over quantity. The RPMax dataset is therefore orders of magnitude smaller than it would be if repeated characters and situations had been kept, but the end result is a model that does not feel like just another remix of every other creative writing/RP model.

# Training Parameters

RPMax's training parameters also take a different approach from other fine-tunes. The usual recipe is a low learning rate and high gradient accumulation for better loss stability, running multiple epochs over the data until the loss is acceptable.
# RPMax's Unconventional Approach

RPMax, on the other hand, is trained for only a single epoch with low gradient accumulation and a higher-than-normal learning rate. The loss curve during training is unstable and jumps up and down a lot, but when smoothed it still decreases steadily over time, although it never reaches a very low value. The theory is that this lets the model learn more from each individual example, and that never showing it the same example twice keeps it from latching onto and reinforcing a single character or story trope.

The loss jumps up and down because each new dataset entry is unlike anything the model has seen before, so it cannot easily predict an answer close to that entry. The relatively high final loss of 1.0 or slightly above is actually fine for RPMax models, because the goal was never to produce a model that outputs exactly like its training dataset, but rather one that is creative enough to come up with its own style of responses.

This is different from training a model on a particular domain where it must reliably reproduce the example dataset, such as when training on a company's internal knowledge base.
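
To make the contrast with the usual recipe concrete, here is a minimal sketch of what such a single-epoch run could look like with Hugging Face `transformers`. The numbers are illustrative assumptions, not the actual RPMax hyperparameters, which are not published here.

```python
from transformers import TrainingArguments

# Illustrative values only -- NOT the actual RPMax hyperparameters.
args = TrainingArguments(
    output_dir="rpmax-sketch",
    num_train_epochs=1,              # every example is seen exactly once
    gradient_accumulation_steps=2,   # low accumulation: noisier, closer-to-per-example updates
    learning_rate=1e-4,              # higher than the typical ~2e-5 fine-tuning rate
    lr_scheduler_type="constant",    # assumption; any scheduler could be used
    logging_steps=1,                 # the raw loss curve will look jumpy; smooth it when reading
)
```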
# Difference between versions?

v1.0 had some mistakes in the training parameters, which is why not many versions of it were created.

v1.1 fixed those errors and is the version where many different base models were used in order to compare them and figure out which are most suitable for RPMax. The consensus is that Mistral-based models are fantastic for RPMax, as they are by far the most uncensored by default. Gemma has quite an interesting writing style, but it had a lot of issues with running and training, and there was generally low interest in it. Llama 3.1 based models also do well, with the 70B having the lowest loss at the end of its training runs.

v1.2 was a fix of the dataset: many entries turned out to contain broken or otherwise nonsensical system prompts or messages in the example conversations. Training on the v1.2 dataset predictably made the models better at following instructions and staying coherent.

v1.3 was not originally planned, but the recently discovered gradient checkpointing bug, and the training frameworks finally being updated with the fix, made it a good excuse to run a v1.3 of RPMax. This version focuses on improving the training parameters: training was done using rsLoRA (rank-stabilized low-rank adaptation) combined with LoRA+. These additions improved the models' learning considerably, with all models achieving lower loss than the previous iteration and producing better-quality outputs in real usage.
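
For readers who want to reproduce the general idea, both techniques are available in common tooling: rsLoRA is a flag on PEFT's `LoraConfig`, and LoRA+ amounts to giving the LoRA B matrices a higher learning rate than the A matrices. The sketch below shows one way to wire this up; the rank, ratio, and target modules are assumptions, not the values used for RPMax.

```python
import torch
from peft import LoraConfig

# rsLoRA scales adapters by alpha/sqrt(r) instead of alpha/r (use_rslora=True).
# Rank, alpha, and target modules below are illustrative, not the RPMax settings.
lora_config = LoraConfig(
    r=64,
    lora_alpha=64,
    use_rslora=True,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

def loraplus_param_groups(model, lr, lr_ratio=16):
    """LoRA+-style optimizer groups: B matrices get a larger learning rate."""
    groups = [{"params": [], "lr": lr}, {"params": [], "lr": lr * lr_ratio}]
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        groups[1 if "lora_B" in name else 0]["params"].append(param)
    return groups

# Example (assuming `peft_model` is a model wrapped with get_peft_model):
# optimizer = torch.optim.AdamW(loraplus_param_groups(peft_model, lr=1e-4))
```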
# Real Success?

RPMax models have been out for a few months at this point, with versions from v1.0 all the way to the new v1.3. So far RPMax has been a resounding success, in that it achieves its original goal of being a creative writing/RP model that does not write like other RP fine-tunes. Many users have mentioned that it almost feels like interacting with a real person in an RP scenario, and that it does impressively unexpected things in their stories that catch them off guard in a good way.

Is it the best model there is? Probably not, but there isn't ever one single best model. So try it out for yourself and maybe you will like it! As always, any feedback on the model is appreciated and will be taken into account for future versions.