---
license: apache-2.0
tags:
- Roleplay
- Solar
- Mistral
- Text Generation
- merge
model-index:
- name: SnowLotus-v2-10.7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 64.76
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BlueNipples/SnowLotus-v2-10.7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.28
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BlueNipples/SnowLotus-v2-10.7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.1
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BlueNipples/SnowLotus-v2-10.7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 45.54
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BlueNipples/SnowLotus-v2-10.7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.08
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BlueNipples/SnowLotus-v2-10.7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 48.75
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BlueNipples/SnowLotus-v2-10.7B
name: Open LLM Leaderboard
---
![SnowLotus Logo](https://cdn-uploads.huggingface.co/production/uploads/64bb1109aaccfd28b023bcec/gTQtPK46laLIFg0RTAv73.png)
### Premise
So this is a basic SLERP merge between a smart model and a good prose model. Prose and smarts: what we all want in an uncensored RP model, right? I feel like Solar has untapped potential, in any case.
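For context, SLERP (spherical linear interpolation) blends two weight tensors along the arc between them rather than along a straight line; this is the standard textbook definition, nothing specific to this merge:

$$
\mathrm{slerp}(\theta_1,\theta_2;t)=\frac{\sin\big((1-t)\,\Omega\big)}{\sin\Omega}\,\theta_1+\frac{\sin(t\,\Omega)}{\sin\Omega}\,\theta_2,
\qquad
\Omega=\arccos\!\left(\frac{\theta_1\cdot\theta_2}{\lVert\theta_1\rVert\,\lVert\theta_2\rVert}\right)
$$

The per-layer gradient values in the Recipe section below sweep $t$ across layer groups.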
Sao10K's Frostwind finetune is a key component of the mixture; its smarts are impressive. NyxKrage's Frostmaid experiment, which merges Frostwind with a frankenmerge of Noromaid and a mystery medical model, delivers quite impressive prose. His model creatively incorporates long-range context and instructions too, despite being slightly incoherent due to the frankenmerging.
So those are the main ingredients. Thanks to Nyx for sorting out the PyTorch files, btw.
GGUF (small selection of imatrix and regular k-quants): https://huggingface.co/BlueNipples/DaringLotus-SnowLotus-10.7b-IQ-GGUF

EXL2s:
- https://huggingface.co/zaq-hack/SnowLotus-v2-10.7B-bpw500-h6-exl2
- https://huggingface.co/lucyknada/SnowLotus-v2-10.7B-3bpw-exl2
### Recipe
So, the recipe. I added Solar-Doc by Nyx to Frostwind at a 0.15 weight, then gradient-SLERP'd Frostwind (+Solar-Doc) into Frostmaid with these params (a full-config sketch follows the snippet):
```yaml
- filter: self_attn
  value: [0.9, 0.4, 0.1, 0, 0]
- filter: mlp
  value: [0.05, 0.95]
- value: 0.45
```
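For anyone wanting to reproduce something similar, here's a minimal sketch of what the full mergekit config might look like. The slerp schema is mergekit's standard one, but which model sits in which slot, the layer range, and the dtype are my assumptions, not the author's published config:

```yaml
# Sketch only: slot assignment, layer_range and dtype are assumptions.
# Frostwind is assumed to already have the Solar-Doc LoRA applied at 0.15 weight.
slices:
  - sources:
      - model: NyxKrage/FrostMaid-10.7B-TESTING-pt
        layer_range: [0, 48]
      - model: Sao10K/Frostwind-10.7B-v1
        layer_range: [0, 48]
merge_method: slerp
base_model: NyxKrage/FrostMaid-10.7B-TESTING-pt
parameters:
  t:
    - filter: self_attn
      value: [0.9, 0.4, 0.1, 0, 0]
    - filter: mlp
      value: [0.05, 0.95]
    - value: 0.45
dtype: float16
```

A config like this runs with `mergekit-yaml config.yml ./output-model` using the mergekit repo linked under Resources.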
### Format Notes
Solar is designed for 4k context, but Nyx reports that his merge works to 8k. Given this model SLERPs a gradient back into that merge, I'm not sure which limit applies here. Use Alpaca instruct formatting.
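For reference, the generic Alpaca instruct template looks like the following; this is the common convention rather than a prompt the author published, so check your frontend's Alpaca preset:

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```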
### Tentative Dozen or So Test Conclusion
This model seems to have better prose and less GPT-ish language than the last version, with no degradation in coherency, whilst retaining the coherency of Frostwind (plus the medical LoRA). I'm very pleased with it; it's exactly what I wanted: basically Nyx's Frostmaid, but smarter.
Cheers to all the finetuners, mergers and developers without which open source models wouldn't be half of what they are.
Resources used:
- https://huggingface.co/NyxKrage/FrostMaid-10.7B-TESTING-pt
- https://huggingface.co/Sao10K/Frostwind-10.7B-v1
- https://huggingface.co/NyxKrage/Solar-Doc-10.7B-Lora
- https://github.com/cg123/mergekit/tree/main
### Ayumi Index
http://ayumi.m8geil.de/erp4_chatlogs/?S=rma_0#!/index
In the Ayumi ERPv4 Chat Log Index, SnowLotus scores 94.10 in Flesch, which means it produces more complex sentences than Daring (itself quite complex); DaringLotus scores higher in Var and Ad[jv], which means it makes heavier use of adjectives and adverbs (it is more descriptive). Notably, Daring is in the top 8 for adjectives per sentence, the highest in its weight class if you discount the Chinese model, and in general both models did very well on this metric (SnowLotus ranks higher here than anything above it in IQ4), showcasing their descriptive ability.
SnowLotus beats DaringLotus on IQ4 with a score of 70.94, beaten only by SOLAR Instruct and Fimbulvetr in its weight class (and, notably, by Kunoichi 7B by a slim margin); DaringLotus is a bit lower at 65.37 - not as smart.
Interestingly, the benchmarking here showed repetition for both models (which I haven't seen), but more with SnowLotus - so it's possible Daring repeats less than SnowLotus? These results roughly confirm my impressions of the differences, although they potentially reveal some new details too. I've had a great experience RPing with these models and have seen no repetition myself, but be sure to use MinP or DynaTemp rather than the older samplers, and be prepared to regen anything they get stuck on!
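As a hypothetical starting point (my suggestion, not settings the author published), a MinP-style sampler setup looks roughly like this; the field names follow common frontends such as SillyTavern and text-generation-webui, so adjust to your UI:

```yaml
temperature: 1.0          # min-p pairs well with temperature near 1
min_p: 0.1                # keep tokens with at least 10% of the top token's probability
top_p: 1.0                # disable nucleus sampling (one of the "older samplers")
top_k: 0                  # disable top-k (another older sampler)
repetition_penalty: 1.05  # light touch; regenerate if the model still loops
```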
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_BlueNipples__SnowLotus-v2-10.7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |65.09|
|AI2 Reasoning Challenge (25-Shot)|64.76|
|HellaSwag (10-Shot) |85.28|
|MMLU (5-Shot) |64.10|
|TruthfulQA (0-shot) |45.54|
|Winogrande (5-shot) |82.08|
|GSM8k (5-shot) |48.75|