Image generated by mayonays_on_toast - Sauce
L3-Super-Nova-RP-8B
This is a role-playing model designed to combine strong creativity with intelligence for advanced role-playing experiences. The aim of L3-Super-Nova-RP-8B is to be good at Chain-of-Thought reasoning, summarizing information, and recognizing emotions. It also includes data about the human body and mind in an attempt to enhance understanding and interaction within role-playing scenarios.
The model was developed using various methods across multiple merging steps. To boost creativity, negative weighting and merge densification techniques were used to strengthen and adjust its output, paired with the newly released DELLA merge method. All merge calculations were done in float32 and then converted to the usual bfloat16 during merging.
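As a quick illustration of that dtype handling, here's a minimal PyTorch sketch (not the actual merge code) of computing in float32 and storing in bfloat16:

```python
import torch

# Sketch: do the merge arithmetic in float32, then store the result
# in the usual bfloat16 (mirrors dtype/out_dtype in the configs below).
a = torch.randn(4096, 4096, dtype=torch.bfloat16)
b = torch.randn(4096, 4096, dtype=torch.bfloat16)
merged = 0.6 * a.float() + 0.4 * b.float()  # calculations in float32
merged = merged.to(torch.bfloat16)          # output as bfloat16
```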
Presets
Text Gen
The current recommended starting preset for this model: Nova (Ooba Only)
Context/Instruct
Virt-io's SillyTavern Presets work really well with this.
Usage Info
Some of the INT models were chosen with several of SillyTavern's features in mind, such as emotion-based sprites, dynamic music, and pretty much any feature, extension, or STscript that uses summarization. With that said, it's recommended to use SillyTavern as your front-end.
While not required, I'd recommend building the story string prompt with Lorebooks rather than using the Advanced Formatting menu. The only thing you really need in the Story String prompt within Advanced Formatting is the system prompt. Doing it this way tends to keep the character more consistent as the RP goes on, since all character card info is locked to a fixed depth rather than drifting further and further back in the context.
Quants
GGUF:
- Static GGUFs by mradermacher
- Imatrix GGUFs by mradermacher
Exl2:
- 8.0bpw-h8 Exl2 by Slvcxc
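If you want to run one of the GGUF quants outside a full front-end, a minimal llama-cpp-python sketch looks like this (the file name is just an example; point it at whichever quant you downloaded):

```python
from llama_cpp import Llama

# Load a downloaded GGUF quant; the path below is illustrative.
llm = Llama(model_path="L3-Super-Nova-RP-8B.Q4_K_M.gguf", n_ctx=8192)

out = llm("You are Nova, a cheerful tavern keeper.\nUser: Hello!\nNova:", max_tokens=128)
print(out["choices"][0]["text"])
```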
Merge Info
The merge methods used were Ties, Dare Ties, Breadcrumbs Ties, SLERP, and DELLA.
The model was finished off with both merge densification and negative weighting techniques to boost creativity.
All merging steps had the merge calculations done in float32 and were output as bfloat16.
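For anyone wanting to reproduce a step, the YAML configs in the Secret Sauce section below can be run with mergekit. A minimal sketch using mergekit's documented Python entry point (paths are illustrative):

```python
import torch
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load one of the YAML configs below; "config.yaml" is an example path.
with open("config.yaml", "r", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    config,
    out_path="./output-model",  # example output directory
    options=MergeOptions(cuda=torch.cuda.is_available(), copy_tokenizer=True),
)
```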
Models Merged
The following models were used to make this merge:
- nothingiisreal/L3-8B-Celeste-v1
- Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
- Sao10K/L3-8B-Stheno-v3.2
- ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
- Sao10K/L3-8B-Lunaris-v1
- turboderp/llama3-turbcat-instruct-8b
- ChaoticNeutrals/Domain-Fusion-L3-8B
- migtissera/Llama-3-8B-Synthia-v3.5
- TheDrummer/Llama-3SOME-8B-v2
- ChaoticNeutrals/Hathor_RP-v.01-L3-8B
- TheSkullery/llama-3-cat-8b-instruct-v1
- FPHam/L3-8B-Everything-COT
- Ayush-1722/Meta-Llama-3-8B-Instruct-Summarize-v0.2-24K-LoRANET-Merged
- OEvortex/Emotional-llama-8B
- lighteternal/Llama3-merge-biomed-8b
- Casual-Autopsy/Llama3-merge-psychotherapy-8b
- Sao10K/L3-8B-Tamamo-v1
- ResplendentAI/Nymph_8B
- ChaoticNeutrals/T-900-8B
- Sao10K/L3-8B-Niitama-v1
- bluuwhale/L3-SthenoMaidBlackroot-8B-V1
- Hastagaras/Jamet-8B-L3-MK.V-Blackroot
- Hastagaras/Halu-8B-Llama3-Blackroot
- crestf411/L3-8B-sunfall-v0.4-stheno-v3.2
Evaluation Results
Open LLM Leaderboard
Explanation for AI RP newbies: IFEval is the most important evaluation for RP AIs, as it determines how well the model can follow OOC instructions, Lorebooks, and, most importantly, character cards. The rest don't matter, at least not nearly as much as IFEval.
Metric | Value |
---|---|
Avg. | N/A |
IFEval (0-Shot) | N/A |
BBH (3-Shot) | N/A |
MATH Lvl 5 (4-Shot) | N/A |
GPQA (0-shot) | N/A |
MuSR (0-shot) | N/A |
MMLU-PRO (5-shot) | N/A |
UGI Leaderboard
Information about the metrics can be found at the bottom of the UGI Leaderboard in the respective tabs.
Metric (UGI Leaderboard) | Value | Value | Metric (Writing Style) |
---|---|---|---|
UGI (Avg.) | 23.56 | 0.199 | RegV1 |
W/10 | 5.8 | 0.218 | RegV2 |
Unruly | 22.5 | 0.15 | MyScore |
Internet | 11.8 | 8.34 | ASSS |
Stats | 18.7 | 10.26 | SMOG |
Writing | 31.5 | 1.76 | Yule |
PolContro | 33.3 | | |
Secret Sauce
The following YAML configs were used to make this merge.
Super-Nova-CRE_pt.1
```yaml
models:
  - model: nothingiisreal/L3-8B-Celeste-v1
  - model: Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
    parameters:
      density: [0.35, 0.45, 0.5, 0.55, 0.65, 0.55, 0.5, 0.45, 0.35]
      weight: [0.495, 0.165, 0.165, 0.495, 0.495, 0.165, 0.165, 0.495]
  - model: Sao10K/L3-8B-Stheno-v3.2
    parameters:
      density: [0.65, 0.55, 0.5, 0.45, 0.35, 0.45, 0.5, 0.55, 0.65]
      weight: [0.165, 0.495, 0.495, 0.165, 0.165, 0.495, 0.495, 0.165]
merge_method: dare_ties
base_model: nothingiisreal/L3-8B-Celeste-v1
parameters:
  normalize: false
  int8_mask: true
dtype: float32
out_dtype: bfloat16
```
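A note on the list-valued density and weight entries: mergekit treats these as gradients, spreading the anchor values across the model's layers and interpolating between them. Roughly (my reading of the behavior, not mergekit's actual code):

```python
import numpy as np

def gradient_value(anchors, layer_idx, num_layers):
    # Rough sketch of a mergekit-style "gradient": anchor values are
    # spaced evenly across the layer stack and linearly interpolated.
    xs = np.linspace(0.0, 1.0, num=len(anchors))
    return float(np.interp(layer_idx / max(num_layers - 1, 1), xs, anchors))

# e.g. the 9-point density gradient above sampled over Llama-3-8B's 32 layers:
density = [0.35, 0.45, 0.5, 0.55, 0.65, 0.55, 0.5, 0.45, 0.35]
print([round(gradient_value(density, i, 32), 3) for i in range(32)])
```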
Super-Nova-CRE_pt.2
```yaml
models:
  - model: nothingiisreal/L3-8B-Celeste-v1
  - model: ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
    parameters:
      density: [0.35, 0.45, 0.5, 0.55, 0.65, 0.55, 0.5, 0.45, 0.35]
      weight: [0.165, 0.495, 0.495, 0.165, 0.165, 0.495, 0.495, 0.165]
  - model: Sao10K/L3-8B-Lunaris-v1
    parameters:
      density: [0.65, 0.55, 0.5, 0.45, 0.35, 0.45, 0.5, 0.55, 0.65]
      weight: [0.495, 0.165, 0.165, 0.495, 0.495, 0.165, 0.165, 0.495]
merge_method: dare_ties
base_model: nothingiisreal/L3-8B-Celeste-v1
parameters:
  normalize: false
  int8_mask: true
dtype: float32
out_dtype: bfloat16
```
Super-Nova-UNC_pt.1
```yaml
models:
  - model: turboderp/llama3-turbcat-instruct-8b
  - model: ChaoticNeutrals/Domain-Fusion-L3-8B
    parameters:
      density: 0.5
      weight: [0.495, 0.165, 0.165, 0.495, 0.495, 0.165, 0.165, 0.495]
  - model: migtissera/Llama-3-8B-Synthia-v3.5
    parameters:
      density: 0.5
      weight: [0.165, 0.495, 0.495, 0.165, 0.165, 0.495, 0.495, 0.165]
merge_method: dare_ties
base_model: turboderp/llama3-turbcat-instruct-8b
parameters:
  normalize: false
  int8_mask: true
dtype: float32
out_dtype: bfloat16
```
Super-Nova-UNC_pt.2
```yaml
models:
  - model: turboderp/llama3-turbcat-instruct-8b
  - model: TheDrummer/Llama-3SOME-8B-v2
    parameters:
      density: 0.5
      weight: [0.165, 0.495, 0.495, 0.165, 0.165, 0.495, 0.495, 0.165]
  - model: ChaoticNeutrals/Hathor_RP-v.01-L3-8B
    parameters:
      density: 0.5
      weight: [0.495, 0.165, 0.165, 0.495, 0.495, 0.165, 0.165, 0.495]
merge_method: dare_ties
base_model: turboderp/llama3-turbcat-instruct-8b
parameters:
  normalize: false
  int8_mask: true
dtype: float32
out_dtype: bfloat16
```
Super-Nova-INT_pt.1
```yaml
models:
  - model: TheSkullery/llama-3-cat-8b-instruct-v1
  - model: FPHam/L3-8B-Everything-COT
    parameters:
      density: 0.5
      weight: [0.139, 0.139, 0.208, 0.139, 0.208]
  - model: Ayush-1722/Meta-Llama-3-8B-Instruct-Summarize-v0.2-24K-LoRANET-Merged
    parameters:
      density: 0.5
      weight: [0.139, 0.208, 0.139, 0.208, 0.139]
  - model: OEvortex/Emotional-llama-8B
    parameters:
      density: 0.5
      weight: [0.208, 0.139, 0.208, 0.139, 0.139]
  - model: lighteternal/Llama3-merge-biomed-8b
    parameters:
      density: 0.5
      weight: [0.208, 0.139, 0.139, 0.139, 0.208]
  - model: Casual-Autopsy/Llama3-merge-psychotherapy-8b
    parameters:
      density: 0.5
      weight: [0.139, 0.208, 0.139, 0.208, 0.139]
merge_method: ties
base_model: TheSkullery/llama-3-cat-8b-instruct-v1
parameters:
  normalize: false
  int8_mask: true
dtype: float32
out_dtype: bfloat16
```
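For the plain ties step above: TIES-merging trims each task vector, elects a majority sign per parameter, and only keeps deltas that agree with it. A rough sketch of the published procedure (not mergekit's exact code, which also honors normalize and the weight gradients):

```python
import torch

def ties_combine(deltas, weights):
    # TIES sketch: elect a per-parameter majority sign from the weighted
    # task vectors, then average only the deltas that agree with it.
    stacked = torch.stack([w * d for d, w in zip(deltas, weights)])
    sign = torch.sign(stacked.sum(dim=0))   # elected sign
    agree = torch.sign(stacked) == sign     # agreement mask
    return (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
```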
Super-Nova-INT_pt.2
```yaml
models:
  - model: TheSkullery/llama-3-cat-8b-instruct-v1
  - model: FPHam/L3-8B-Everything-COT
    parameters:
      density: 0.9
      gamma: 0.01
      weight: [0.139, 0.208, 0.208, 0.139, 0.139]
  - model: Ayush-1722/Meta-Llama-3-8B-Instruct-Summarize-v0.2-24K-LoRANET-Merged
    parameters:
      density: 0.9
      gamma: 0.01
      weight: [0.208, 0.139, 0.139, 0.139, 0.208]
  - model: OEvortex/Emotional-llama-8B
    parameters:
      density: 0.9
      gamma: 0.01
      weight: [0.139, 0.139, 0.208, 0.208, 0.139]
  - model: lighteternal/Llama3-merge-biomed-8b
    parameters:
      density: 0.9
      gamma: 0.01
      weight: [0.139, 0.208, 0.139, 0.208, 0.139]
  - model: Casual-Autopsy/Llama3-merge-psychotherapy-8b
    parameters:
      density: 0.9
      gamma: 0.01
      weight: [0.208, 0.139, 0.139, 0.139, 0.208]
merge_method: breadcrumbs_ties
base_model: TheSkullery/llama-3-cat-8b-instruct-v1
parameters:
  normalize: false
  int8_mask: true
dtype: float32
out_dtype: bfloat16
```
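On the breadcrumbs_ties parameters: as I understand Model Breadcrumbs, density sets how much of each task vector survives while gamma prunes a sliver of the largest-magnitude outliers. A heavily hedged sketch of that masking step:

```python
import torch

def breadcrumbs_mask(delta, density=0.9, gamma=0.01):
    # Hedged sketch: discard the top `gamma` fraction of entries by
    # magnitude as outliers, then keep the next-largest entries until
    # `density` of the tensor survives; everything else is zeroed.
    mag = delta.abs().flatten()
    order = torch.argsort(mag, descending=True)
    n = mag.numel()
    keep = order[int(n * gamma):int(n * gamma) + int(n * density)]
    mask = torch.zeros(n, dtype=torch.bool)
    mask[keep] = True
    return delta * mask.reshape(delta.shape)
```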
Super-Nova-CRE
```yaml
models:
  - model: Casual-Autopsy/Super-Nova-CRE_pt.1
  - model: Casual-Autopsy/Super-Nova-CRE_pt.2
merge_method: slerp
base_model: Casual-Autopsy/Super-Nova-CRE_pt.1
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
  embed_slerp: true
dtype: float32
out_dtype: bfloat16
```
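The t values follow the same gradient convention, with separate curves for the self_attn and mlp blocks and 0.5 everywhere else. For reference, spherical linear interpolation between two weight tensors looks roughly like this (a sketch; mergekit's implementation also falls back to plain lerp for near-parallel tensors):

```python
import torch

def slerp(t, v0, v1, eps=1e-7):
    # Sketch of SLERP: interpolate along the arc between two tensors
    # rather than along the straight line between them.
    v0f, v1f = v0.float().flatten(), v1.float().flatten()
    cos_omega = torch.dot(v0f, v1f) / (v0f.norm() * v1f.norm() + eps)
    omega = torch.acos(cos_omega.clamp(-1 + eps, 1 - eps))
    so = torch.sin(omega)
    out = (torch.sin((1 - t) * omega) / so) * v0f + (torch.sin(t * omega) / so) * v1f
    return out.reshape(v0.shape)
```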
Super-Nova-UNC
```yaml
models:
  - model: Casual-Autopsy/Super-Nova-UNC_pt.1
  - model: Casual-Autopsy/Super-Nova-UNC_pt.2
merge_method: slerp
base_model: Casual-Autopsy/Super-Nova-UNC_pt.1
parameters:
  t:
    - value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
  embed_slerp: true
dtype: float32
out_dtype: bfloat16
```
Super-Nova-INT
```yaml
models:
  - model: Casual-Autopsy/Super-Nova-INT_pt.1
  - model: Casual-Autopsy/Super-Nova-INT_pt.2
merge_method: slerp
base_model: Casual-Autopsy/Super-Nova-INT_pt.1
parameters:
  t:
    - value: 0.5
  embed_slerp: true
dtype: float32
out_dtype: bfloat16
```
Super-Nova-RP_stp.1
```yaml
models:
  - model: Casual-Autopsy/Super-Nova-CRE
  - model: Casual-Autopsy/Super-Nova-UNC
merge_method: slerp
base_model: Casual-Autopsy/Super-Nova-CRE
parameters:
  t:
    - value: [0.7, 0.5, 0.3, 0.25, 0.2, 0.25, 0.3, 0.5, 0.7]
  embed_slerp: true
dtype: float32
out_dtype: bfloat16
```
Super-Nova-RP_stp.2
```yaml
models:
  - model: Casual-Autopsy/Super-Nova-RP_stp.1
  - model: Casual-Autopsy/Super-Nova-INT
merge_method: slerp
base_model: Casual-Autopsy/Super-Nova-RP_stp.1
parameters:
  t:
    - value: [0.1, 0.15, 0.2, 0.4, 0.6, 0.4, 0.2, 0.15, 0.1]
  embed_slerp: true
dtype: float32
out_dtype: bfloat16
```
Super-Nova-RP_pt.1
```yaml
models:
  - model: Casual-Autopsy/Super-Nova-RP_stp.2
  - model: Sao10K/L3-8B-Tamamo-v1
    parameters:
      density: [0.4, 0.6, 0.5, 0.6, 0.4]
      epsilon: [0.15, 0.15, 0.25, 0.15, 0.15]
      lambda: 0.85
      weight: [-0.01523, 0.01768, -0.01384, 0.01835, -0.01247]
  - model: ResplendentAI/Nymph_8B
    parameters:
      density: [0.65, 0.35, 0.5, 0.35, 0.65]
      epsilon: [0.1, 0.1, 0.25, 0.1, 0.1]
      lambda: 0.85
      weight: [0.01823, -0.01647, 0.01422, -0.01975, 0.01128]
  - model: ChaoticNeutrals/T-900-8B
    parameters:
      density: [0.35, 0.65, 0.5, 0.65, 0.35]
      epsilon: [0.1, 0.1, 0.25, 0.1, 0.1]
      lambda: 0.85
      weight: [-0.01891, 0.01554, -0.01325, 0.01791, -0.01458]
  - model: Sao10K/L3-8B-Niitama-v1
    parameters:
      density: [0.6, 0.4, 0.5, 0.4, 0.6]
      epsilon: [0.15, 0.15, 0.25, 0.15, 0.15]
      lambda: 0.85
      weight: [0.01768, -0.01675, 0.01285, -0.01696, 0.01421]
merge_method: della
base_model: Casual-Autopsy/Super-Nova-RP_stp.2
parameters:
  normalize: false
  int8_mask: true
dtype: float32
out_dtype: bfloat16
```
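The tiny alternating negative weights here are the negative weighting technique from the merge info: with delta-based methods like DELLA, a negative weight subtracts a model's task vector instead of adding it. Schematically (a simplification, ignoring DELLA's pruning and rescaling):

```python
import torch

def weighted_delta_merge(base, finetunes, weights):
    # Each model contributes its task vector (finetune - base); a
    # negative weight pushes the merge away from that direction.
    merged = base.float().clone()
    for ft, w in zip(finetunes, weights):
        merged += w * (ft.float() - base.float())
    return merged.to(torch.bfloat16)
```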
Super-Nova-RP_pt.2
```yaml
models:
  - model: Casual-Autopsy/Super-Nova-RP_stp.2
  - model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
    parameters:
      density: [0.4, 0.6, 0.5, 0.6, 0.4]
      epsilon: [0.15, 0.15, 0.25, 0.15, 0.15]
      lambda: 0.85
      weight: [-0.01935, 0.01785, -0.01512, 0.01809, -0.01371]
  - model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
    parameters:
      density: [0.65, 0.35, 0.5, 0.35, 0.65]
      epsilon: [0.1, 0.1, 0.25, 0.1, 0.1]
      lambda: 0.85
      weight: [0.01847, -0.01468, 0.01503, -0.01822, 0.01459]
  - model: Hastagaras/Halu-8B-Llama3-Blackroot
    parameters:
      density: [0.35, 0.65, 0.5, 0.65, 0.35]
      epsilon: [0.1, 0.1, 0.25, 0.1, 0.1]
      lambda: 0.85
      weight: [-0.01578, 0.01821, -0.01753, 0.01677, -0.01442]
  - model: crestf411/L3-8B-sunfall-v0.4-stheno-v3.2
    parameters:
      density: [0.6, 0.5, 0.5, 0.5, 0.6]
      epsilon: [0.15, 0.15, 0.25, 0.15, 0.15]
      lambda: 0.85
      weight: [0.01667, -0.01740, 0.01560, -0.01564, 0.01315]
merge_method: della
base_model: Casual-Autopsy/Super-Nova-RP_stp.2
parameters:
  normalize: false
  int8_mask: true
dtype: float32
out_dtype: bfloat16
```
L3-Super-Nova-RP-8B
```yaml
models:
  - model: Casual-Autopsy/Super-Nova-RP_pt.1
  - model: Casual-Autopsy/Super-Nova-RP_pt.2
merge_method: slerp
base_model: Casual-Autopsy/Super-Nova-RP_pt.1
parameters:
  t:
    - value: 0.5
dtype: float32
out_dtype: bfloat16
```
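To run the finished merge with Transformers, a minimal sketch (assuming the repo id Casual-Autopsy/L3-Super-Nova-RP-8B):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Casual-Autopsy/L3-Super-Nova-RP-8B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Hello there!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```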