# Twizzler-7B
I tried to expand Erosumika with more abilities while keeping her brain intact.

The first key to this was injecting a small amount of a highly volatile HoloViolet test merge I made earlier, which is itself a mix of the very creative but unhinged Holodeck and a smart model by Greennode that I enjoyed. The other special ingredient is Nexus-IKM, which was trained on an internal knowledge map dataset that makes its line of reasoning often noticeably different from other Mistral tunes. It balances out the inconsistencies of HoloViolet while adding more creativity and logic at the same time. Finally, I mixed in some base Mistral-7B-v0.2 for longer context support and more intelligence. I went with the non-instruct version because I felt this merge should focus more on story writing than prompt following, and I wanted to avoid GPT-isms like bonds and journeys as much as possible.

All in all, this merge has a very distinct writing style that focuses less on flowery language and more on interesting ideas and interactions. It can go off the deep end and make lots of stupid mistakes sometimes, but it can also output some really good stuff if you're lucky.
## Prompts and settings
I recommend simple formats like Alpaca, and not giving it too many instructions to get confused by. It is a 7B, after all.

As for settings, I enjoy using dynamic temperature from 1 to 5 with a min P of 0.1 and a typical P of 0.95.
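For reference, the standard Alpaca prompt template looks like this (the `{instruction}` placeholder stands for your actual prompt):

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```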
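To illustrate what the min P part of those settings does (this is a generic sketch of min-p filtering, not the exact code any particular backend uses; dynamic temperature and typical P are separate mechanisms applied alongside it):

```python
import numpy as np

def min_p_filter(probs: np.ndarray, min_p: float = 0.1) -> np.ndarray:
    """Zero out tokens whose probability is below min_p times the
    top token's probability, then renormalize the rest."""
    threshold = min_p * probs.max()
    filtered = np.where(probs >= threshold, probs, 0.0)
    return filtered / filtered.sum()

# With min_p = 0.1, any token at least 10% as likely as the most
# likely token survives; the long tail of unlikely tokens is cut off.
probs = np.array([0.5, 0.3, 0.15, 0.05])
print(min_p_filter(probs, min_p=0.4))
```

A low min P like 0.1 keeps sampling diverse while still trimming nonsense tokens, which pairs well with the high temperatures dynamic temperature can reach.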
## Details
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the task arithmetic merge method, with alpindale/Mistral-7B-v0.2-hf as the base.
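The idea behind task arithmetic is simple: each finetune is reduced to a "task vector" (its weights minus the base model's weights), and the merged model is the base plus a weighted sum of those vectors. A minimal sketch with NumPy arrays standing in for model tensors (the real mergekit implementation operates per-tensor across checkpoints):

```python
import numpy as np

def task_arithmetic(base: np.ndarray, finetunes: list, weights: list) -> np.ndarray:
    """merged = base + sum_i w_i * (finetune_i - base)"""
    merged = base.copy()
    for ft, w in zip(finetunes, weights):
        merged += w * (ft - base)  # add the weighted task vector
    return merged
```

Note that listing the base model itself as a weighted source (as in the config below) contributes a zero task vector, since base minus base vanishes.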
### Models Merged
The following models were included in the merge:
- Severian/Nexus-IKM-Mistral-Instruct-v0.2-7B
- son-of-man/HoloViolet-7B-test3
- localfultonextractor/Erosumika-7B-v3
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model:
  model:
    path: alpindale/Mistral-7B-v0.2-hf
dtype: bfloat16
merge_method: task_arithmetic
slices:
- sources:
  - layer_range: [0, 32]
    model:
      model:
        path: alpindale/Mistral-7B-v0.2-hf
    parameters:
      weight: 0.3
  - layer_range: [0, 32]
    model:
      model:
        path: son-of-man/HoloViolet-7B-test3
    parameters:
      weight: 0.2
  - layer_range: [0, 32]
    model:
      model:
        path: localfultonextractor/Erosumika-7B-v3
    parameters:
      weight: 0.3
  - layer_range: [0, 32]
    model:
      model:
        path: Severian/Nexus-IKM-Mistral-Instruct-v0.2-7B
    parameters:
      weight: 0.2
```