Darkening and Horrifying other models
Hi David,
Do you take requests for darkening or horrifying other models?
Hey,
This all depends on available donor model(s), size of final model and other considerations.
One of the easiest (but least effective) is using a NEO Imatrix Horror dataset - this can be applied to any model.
See:
https://huggingface.co/DavidAU/Llama-3.2-1B-Instruct-NEO-WEE-HORROR-GGUF
and
https://huggingface.co/DavidAU/Command-R-01-Ultra-NEO-DARK-HORROR-V1-V2-35B-IMATRIX-GGUF
However models like "L3-Stheno-Maid-Blackroot-Grand-HORROR-16B" (and others in this series) are a more involved process.
Did you have a specific model or models you wanted to "horrify"?
You may want to consider contacting the makers of the models used in "L3-Stheno-Maid-Blackroot-Grand-HORROR-16B", as they may be able
to assist you better and provide a "fine tune" of the model(s) you want "horrified", so to speak.
I'm actually more interested in "darkening" models than "horrifying" them; the dark models are way better than the horror ones.
I think you have something very special here.
I'm not sure how much you tested them, but as far as creative writing goes, your modded models are second to none.
I'm not even using them for really dark or horror stuff, just creative writing.
Also, you have several dark and horror models, which yield different quality levels.
Is the darkening the same for all the dark models, or different? For example, is there one standard darkening, or are the SpinFire and Ring World darkenings separate processes?
To give two examples of models to be darkened:
https://huggingface.co/mergekit-community/Moist_Theia_21B which is a merge I made
Here are GGUFs if you want to test drive it; mradermacher also made imatrix versions of it:
https://huggingface.co/mradermacher/Moist_Theia_21B-GGUF
Another one would be this:
https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated
GGUFs:
https://huggingface.co/QuantFactory/NeuralDaredevil-8B-abliterated-GGUF
What are the requirements for the parameters you mentioned here: "This all depends on available donor model(s), size of final model and other considerations."
P.S. Have you tried applying both the darken and horrify mods to a model at the same time?
RE: Creativity - thank you! That is one of my core goals.
For my models, you may want to read this new doc I made which talks about all the settings for all models to get maximum performance from them:
https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters
RE: Moist_Theia_21B
These are two fine tunes (expanded first) at 70 layers.
In order to merge more model(s) with these, your "merge model" must first be upscaled to 70 layers, and then a number of merge processes done to "darken" the model.
REASON: Mergekit requires models to have the same number of layers when merging.
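As a rough illustration, a layer upscale like this can be done with mergekit's "passthrough" method by repeating a block of mid-stack layers. The sketch below is only a guess at the shape of such a config - the model name, layer count, and slice boundaries are placeholders, not David's actual recipe:

```yaml
# Hypothetical passthrough config: upscale a 62-layer model to 70 layers
# by repeating 8 mid-stack layers. All names and ranges are placeholders.
slices:
  - sources:
      - model: your-org/your-21B-merge
        layer_range: [0, 40]
  - sources:
      - model: your-org/your-21B-merge
        layer_range: [32, 40]   # repeated block: 62 + 8 = 70 layers
  - sources:
      - model: your-org/your-21B-merge
        layer_range: [40, 62]
merge_method: passthrough
dtype: bfloat16
```

Once both models sit at the same layer count, the actual "darkening" merges can be run between them.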
The alternative is IMATRIX with a "horror dataset" (this darkens the model).
The first process will be far stronger (and more trial and error) than the second process.
Roughly, on a scale of 1-10, "Imatrix" would be a 1 or 2 (maybe 3), whereas a merge process would be 7-10+, 10 being highest (but far more involved).
RE: NeuralDaredevil-8B-abliterated
This is a standard 8B and there are a lot of models that can be merged with it to darken it; likewise, an Imatrix horror dataset can be used to darken it too.
The merge process is easier here because the number of layers has not been changed (relative to other 8Bs).
RE: Ringworld / Spinfire
Ringworld is a "capture" of DARK PLANET 8B, but at F32. Sometimes capturing a model merge at higher bits results in a somewhat to very different model.
Spinfire is a "light merge" of Dark Planet's core models, with a L 3.1 model ("Llama-3.1-8B-Lexi-Uncensored-V2") to "de-censor" it.
The Spinfire process can be used to "darken" a model too.
Technically Ringworld and Spinfire can then have an Imatrix Horror dataset applied to them to further darken them.
Before darkening Moist_Theia-21B you may want to try first:
Modifying: t: [0, 0.5, 1, 0.5, 0]
That is, change the numbers themselves and EXPAND the list from 5 entries to 10, 15, 20, ... right up to 70 (it should be a number you can equally divide into 70).
Likewise, change the merge type to DARE TIES, Task Arithmetic, Breadcrumbs, etc. (you would use "weight" for these; other parameters are also available).
Start with 0.5 for weight; then use a "dataset" ([x, y, z, ...]) for more precise control.
Each one will give you a different "end model".
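As a concrete (and purely illustrative) sketch of what the t-gradient tweaking above might look like in a mergekit config - model names, layer counts, and every value here are placeholders, not a tested recipe:

```yaml
# Hypothetical slerp config with the 5-point t-gradient expanded to 10
# points for finer per-layer-group control. Names/numbers illustrative only.
slices:
  - sources:
      - model: base-org/base-model
        layer_range: [0, 70]
      - model: dark-org/dark-donor
        layer_range: [0, 70]
merge_method: slerp
base_model: base-org/base-model
parameters:
  t: [0, 0.1, 0.3, 0.5, 1, 1, 0.5, 0.3, 0.1, 0]
dtype: bfloat16

# Alternative: switch merge_method and control the donor via "weight"
# instead of "t" (start flat, then move to a per-layer list):
# merge_method: dare_ties
# parameters:
#   weight: 0.5          # later: weight: [0.2, 0.4, 0.6, ...]
#   density: 0.5
```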
If you want me to create an Imatrix version of either model you mentioned, say the word and I can upload it / them.
I will review operation/performance of Moist_Theia-21B in the meantime, in a day or two.
I'm not a coder/dev, so it would be really nice if you could do it.
Moist_Theia-21B would be my choice ideally darkened in the style of DARKEST-PLANET-16.5B or Dark-Planet-8B if it's possible.
Thank You !
David, where do you get these creative writing datasets for horror etc?
@SzilviaB
I will download/test and see what happens with Imatrix NEO Horror; can't give you an ETA here - likely next week at the soonest.
The process is: download GGUF / test -> then source -> quant F16 -> create an imatrix dataset, quant IMAT -> test ... upload (and page info at my repo).
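Sketched as llama.cpp commands, the middle of that pipeline might look roughly like this - binary and script names are from current llama.cpp, but all paths, the dataset file, and the quant type are placeholders, not David's actual workflow:

```shell
# Hedged sketch of the F16 -> imatrix -> imatrix-quant steps.

# 1. Convert the source model to an F16 GGUF
python convert_hf_to_gguf.py ./source-model --outtype f16 --outfile model-f16.gguf

# 2. Build the importance matrix from a calibration / "horror" text dataset
./llama-imatrix -m model-f16.gguf -f neo-horror-dataset.txt -o imatrix.dat

# 3. Quantize using that imatrix
./llama-quantize --imatrix imatrix.dat model-f16.gguf model-IQ4_XS.gguf IQ4_XS
```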
If the core models of your merge are in line with some of my current work, I may take this further (i.e. the complex "darkening"); however, the time commitment jumps here.
@DazzlingXeno
These are created by me.
In the case of the horror datasets, Grand Horror 16B was used with 90 prompts (horror based) and the answers filtered (best only), recorded and compressed into a text file.
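As a toy sketch of that kind of dataset assembly - keep only the best answers per prompt and pack them into one plain-text file for imatrix calibration. The scores, threshold, and sample text below are made-up placeholders, not David's actual filtering process:

```python
# Toy sketch: filter prompt/answer pairs by a quality score, drop
# duplicates, and join the survivors into one calibration text file.

def build_imatrix_dataset(samples, min_score=0.8):
    """samples: list of (prompt, answer, score) tuples."""
    kept = [answer.strip() for _, answer, score in samples if score >= min_score]
    # Imatrix calibration data is just plain text; separate entries with
    # blank lines and drop exact duplicates to keep the file compact.
    seen, out = set(), []
    for text in kept:
        if text not in seen:
            seen.add(text)
            out.append(text)
    return "\n\n".join(out)

samples = [
    ("p1", "The house breathed in the dark.", 0.9),
    ("p2", "It was a dark and stormy night.", 0.4),   # below threshold, filtered
    ("p3", "The house breathed in the dark.", 0.95),  # duplicate, dropped
]
dataset = build_imatrix_dataset(samples)
print(dataset)  # -> The house breathed in the dark.
```

The resulting text file is what gets fed to the imatrix step during quantization.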
The sci-fi / general (1st NEO class dataset) is a lot more involved.
Dataset construction was based on testing a lot of different imatrix datasets, then working out (trial / error) how to construct a dataset that has maximum impact on a model's weights.
P.S. If it's not an issue for you I would prefer normal quants instead of iMatrix.