---
language:
- en
license: llama3
library_name: transformers
tags:
- merge
- mergekit
- lazymergekit
- not-for-all-audiences
- nsfw
- rp
- roleplay
- role-play
base_model:
- Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
- bluuwhale/L3-SthenoMaidBlackroot-8B-V1
- Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B
- Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2
- tannedbum/L3-Nymeria-8B
- migtissera/Llama-3-8B-Synthia-v3.5
- Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
- tannedbum/L3-Nymeria-Maid-8B
- Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.2-8B
- aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K
- Nitral-AI/Hathor_Stable-v0.2-L3-8B
- Sao10K/L3-8B-Stheno-v3.1
pipeline_tag: text-generation
model-index:
- name: L3-Umbral-Mind-RP-v2.0-8B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 71.23
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 32.49
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 10.12
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 4.92
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 5.55
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 30.26
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B
name: Open LLM Leaderboard
---
![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)
# QuantFactory/L3-Umbral-Mind-RP-v2.0-8B-GGUF
This is a quantized version of [Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B), created using llama.cpp.
# Original Model Card
| <img src="https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v3-8B/resolve/main/63073798_p0_master1200.jpg" style="display: block; margin: auto;"> |
|:---:|
| Image by ろ47 |
| |
# Merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
***
## Merge Details
The goal of this merge was to make an RP model better suited for role-plays with heavy themes, such as (but not limited to):
- Mental illness
- Self-harm
- Trauma
- Suicide
I hated how RP models tended to be overly positive and hopeful in role-plays involving such themes,
but thanks to [failspy/Llama-3-8B-Instruct-MopeyMule](https://huggingface.co/failspy/Llama-3-8B-Instruct-MopeyMule) this problem has been lessened considerably.
If you're an enjoyer of savior/reverse-savior type role-plays like myself, then this model is for you.
***
## Usage Info
This model is meant to be used with asterisks/quotes RP formats; any format other than asterisks/quotes is likely to cause issues.
***
## Quants
* [imatrix quants](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v2.0-8B-i1-GGUF) by mradermacher
* [Static quants](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v2.0-8B-GGUF) by mradermacher
* Exl2:
- [L3-Umbral-Mind-RP-v2.0-8B-8bpw-h8-exl2](https://huggingface.co/riveRiPH/L3-Umbral-Mind-RP-v2.0-8B-8bpw-h8-exl2) by riveRiPH
- [L3-Umbral-Mind-RP-v2.0-8B-6.3bpw-h8-exl2](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B-6.3bpw-h8-exl2) by yours truly
- [L3-Umbral-Mind-RP-v2.0-8B-5.3bpw-h6-exl2](https://huggingface.co/riveRiPH/L3-Umbral-Mind-RP-v2.0-8B-5.3bpw-h6-exl2) by riveRiPH
***
## Merge Method
This model was merged using several Task Arithmetic merges and then tied together with a Model Stock merge, followed by another Task Arithmetic merge with a model containing psychology data.
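In rough terms, a Task Arithmetic merge adds the weighted parameter deltas ("task vectors") of fine-tuned models back onto a base model. A minimal NumPy sketch of the idea on toy tensors (this illustrates the concept only, not mergekit's actual implementation):

```python
import numpy as np

def task_arithmetic(base, finetuned, weights):
    """Merge fine-tuned checkpoints into a base model by summing weighted
    task vectors (finetuned - base). Toy sketch of the idea behind
    mergekit's task_arithmetic method, not its real implementation."""
    merged = base.copy()
    for model, w in zip(finetuned, weights):
        merged += w * (model - base)  # task vector scaled by its merge weight
    return merged

base = np.array([1.0, 2.0, 3.0])
ft_a = np.array([2.0, 2.0, 3.0])  # toy "fine-tuned" checkpoints
ft_b = np.array([1.0, 4.0, 3.0])
print(task_arithmetic(base, [ft_a, ft_b], [0.5, 0.25]))  # → [1.5 2.5 3. ]
```

Because only the deltas are added, regions where a fine-tune did not change the base pass through unchanged, which is why small weights (like the 0.01–0.04 used in the final merge below) act as a gentle nudge rather than an overwrite.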
### Models Merged
The following models were included in the merge:
* [Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B)
* [bluuwhale/L3-SthenoMaidBlackroot-8B-V1](https://huggingface.co/bluuwhale/L3-SthenoMaidBlackroot-8B-V1)
* [Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B](https://huggingface.co/Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B)
* [Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2](https://huggingface.co/Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2)
* [tannedbum/L3-Nymeria-8B](https://huggingface.co/tannedbum/L3-Nymeria-8B)
* [migtissera/Llama-3-8B-Synthia-v3.5](https://huggingface.co/migtissera/Llama-3-8B-Synthia-v3.5)
* [Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B)
* [tannedbum/L3-Nymeria-Maid-8B](https://huggingface.co/tannedbum/L3-Nymeria-Maid-8B)
* [Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.2-8B](https://huggingface.co/Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.2-8B)
* [aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K](https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K)
* [Nitral-AI/Hathor_Stable-v0.2-L3-8B](https://huggingface.co/Nitral-AI/Hathor_Stable-v0.2-L3-8B)
* [Sao10K/L3-8B-Stheno-v3.1](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.1)
***
## Evaluation Results
### [Open LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Casual-Autopsy__L3-Umbral-Mind-RP-v2.0-8B)
**Explanation for AI RP newbies:** IFEval is the most important evaluation for RP AIs, as it determines how well the model can follow OOC instructions, Lorebooks, and, most importantly, character cards.
The rest don't matter. At least not nearly as much as IFEval.
| Metric |Value|
|-------------------|----:|
|Avg. |25.76|
|IFEval (0-Shot) |71.23|
|BBH (3-Shot) |32.49|
|MATH Lvl 5 (4-Shot)|10.12|
|GPQA (0-shot) | 4.92|
|MuSR (0-shot) | 5.55|
|MMLU-PRO (5-shot) |30.26|
### [UGI Leaderboard](https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard)
Information about the metrics can be found at the bottom of the [UGI Leaderboard](https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard) in the respective tabs.
|Metric (UGI-Leaderboard) | Value | Value | Metric (Writing Style)|
|:------------------------|:-----:|:-----:|----------------------:|
|UGI(Avg.) |31.82 |0.107 |RegV1 |
|W/10 |5.83 |0.096 |RegV2 |
|Unruly |43.3 |0.05 |MyScore |
|Internet |20 |9.12 |ASSS |
|Stats |23.6 |0 |SMOG |
|Writing |33.8 |1.47 |Yule |
|PolContro |38.3 | | |
***
## Secret Sauce
The following YAML configurations were used to produce this model:
### Umbral-1
```yaml
models:
- model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
- model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
parameters:
density: 0.45
weight: 0.4
- model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B
parameters:
density: 0.65
weight: 0.1
merge_method: dare_ties
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
parameters:
int8_mask: true
dtype: bfloat16
```
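The `dare_ties` method sparsifies each model's task vector (DARE) before a sign-consensus (TIES) merge; `density` controls what fraction of delta entries survive. A toy NumPy sketch of just the DARE drop-and-rescale step, assuming random Bernoulli masking (not mergekit's actual code):

```python
import numpy as np

def dare_sparsify(task_vector, density, rng):
    """DARE step: keep each delta entry with probability `density`,
    zero out the rest, and rescale survivors by 1/density so the
    expected value of the task vector is preserved. Toy sketch only."""
    mask = rng.random(task_vector.shape) < density
    return np.where(mask, task_vector / density, 0.0)

rng = np.random.default_rng(0)
tv = np.ones(10_000)                          # toy task vector
sparse = dare_sparsify(tv, density=0.45, rng=rng)
print(sparse.mean())  # close to 1.0 in expectation despite 55% zeros
```

This is why two models can both use `density` well below 1.0, as in the configs above, without the merge systematically shrinking either model's contribution.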
### Umbral-2
```yaml
models:
- model: Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2
- model: tannedbum/L3-Nymeria-8B
parameters:
density: 0.45
weight: 0.25
- model: migtissera/Llama-3-8B-Synthia-v3.5
parameters:
density: 0.65
weight: 0.25
merge_method: dare_ties
base_model: Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2
parameters:
int8_mask: true
dtype: bfloat16
```
### Umbral-3
```yaml
models:
- model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
- model: tannedbum/L3-Nymeria-Maid-8B
parameters:
density: 0.4
weight: 0.3
- model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B
parameters:
density: 0.6
weight: 0.2
merge_method: dare_ties
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
parameters:
int8_mask: true
dtype: bfloat16
```
### Mopey-Omelette
```yaml
models:
- model: Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.2-8B
- model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B
parameters:
weight: 0.15
merge_method: task_arithmetic
base_model: Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.2-8B
dtype: bfloat16
```
### Umbral-Mind-1
```yaml
models:
- model: Casual-Autopsy/Umbral-1
- model: Casual-Autopsy/Umbral-3
merge_method: slerp
base_model: Casual-Autopsy/Umbral-1
parameters:
t:
- value: [0.7, 0.5, 0.3, 0.25, 0.2, 0.25, 0.3, 0.5, 0.7]
embed_slerp: true
dtype: bfloat16
```
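The `slerp` merges above interpolate along the arc between two models' weights rather than along the straight line; the list under `t` supplies per-depth interpolation factors (values near 0 favor the base model, values near 1 favor the other). A minimal sketch of spherical linear interpolation on toy vectors, assuming NumPy (not mergekit's actual code):

```python
import numpy as np

def slerp(a, b, t):
    """Spherical linear interpolation between two flattened weight tensors.
    Toy sketch of the idea behind mergekit's slerp method; falls back to
    ordinary lerp when the vectors are nearly parallel."""
    a_n = a / np.linalg.norm(a)
    b_n = b / np.linalg.norm(b)
    dot = np.clip(np.dot(a_n, b_n), -1.0, 1.0)
    theta = np.arccos(dot)                     # angle between the tensors
    if theta < 1e-6:                           # nearly parallel: lerp is fine
        return (1 - t) * a + t * b
    return (np.sin((1 - t) * theta) * a + np.sin(t * theta) * b) / np.sin(theta)

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
print(slerp(a, b, 0.5))  # midpoint on the arc between a and b
```

Unlike a plain average, slerp preserves the magnitude structure along the arc, which is one reason it is a popular choice for blending two strong checkpoints.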
### Umbral-Mind-2
```yaml
models:
- model: Casual-Autopsy/Umbral-Mind-1
- model: Casual-Autopsy/Umbral-2
merge_method: slerp
base_model: Casual-Autopsy/Umbral-Mind-1
parameters:
t:
- value: [0.1, 0.15, 0.2, 0.4, 0.6, 0.4, 0.2, 0.15, 0.1]
embed_slerp: true
dtype: bfloat16
```
### Umbral-Mind-3
```yaml
models:
- model: Casual-Autopsy/Umbral-Mind-2
- model: Casual-Autopsy/Mopey-Omelette
merge_method: slerp
base_model: Casual-Autopsy/Umbral-Mind-2
parameters:
t:
- value: [0.2, 0.25, 0.3, 0.4, 0.3, 0.25, 0.2, 0.25, 0.3, 0.4, 0.3, 0.25, 0.2]
embed_slerp: true
dtype: bfloat16
```
### L3-Umbral-Mind-RP-v2.0-8B
```yaml
models:
- model: Casual-Autopsy/Umbral-Mind-3
- model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
parameters:
weight: 0.04
- model: aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K
parameters:
weight: 0.02
- model: Nitral-AI/Hathor_Stable-v0.2-L3-8B
parameters:
weight: 0.02
- model: Sao10K/L3-8B-Stheno-v3.1
parameters:
weight: 0.01
merge_method: task_arithmetic
base_model: Casual-Autopsy/Umbral-Mind-3
dtype: bfloat16
```