---
license: apache-2.0
tags:
- Roleplay
- Solar
- Mistral
- Text Generation
---
![SnowLotus Logo](https://cdn-uploads.huggingface.co/production/uploads/64bb1109aaccfd28b023bcec/gTQtPK46laLIFg0RTAv73.png)

### Premise

So this is a basic SLERP merge between a smart model and a good prose model. Prose and smarts: what we all want in an uncensored RP model, right? In any case, I feel like Solar has untapped potential.

Sao10K's Frostwind finetune is a key component of the mixture, and its smarts are impressive. NyxKrage's Frostmaid experiment, which merges Frostwind with a frankenmerge of Noromaid and a mystery medical model, delivers quite impressive prose. His model creatively incorporates long-range context and instructions too, despite being slightly incoherent due to the franken merging.

So those are the main ingredients. Thanks to Nyx for sorting out the pytorch files btw. 

- GGUF (small selection of Imatrix and regular k-quants): https://huggingface.co/BlueNipples/DaringLotus-SnowLotus-10.7b-IQ-GGUF
- EXL2s: https://huggingface.co/zaq-hack/SnowLotus-v2-10.7B-bpw500-h6-exl2 and https://huggingface.co/lucyknada/SnowLotus-v2-10.7B-3bpw-exl2

### Recipe

So, the recipe: I added Nyx's Solar-Doc LoRA to Frostwind at a weight of 0.15, then gradient SLERP'd Frostwind (+ Solar-Doc) into Frostmaid with these parameters:

```yaml
- filter: self_attn
  value: [0.9, 0.4, 0.1, 0, 0]
- filter: mlp
  value: [0.05, 0.95]
- value: 0.45
```
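For anyone curious what the SLERP part actually does: rather than averaging weights in a straight line, it interpolates each pair of tensors along the great-circle arc between them, with mergekit spreading the listed gradient values across layer blocks. Here's a schematic sketch of the core operation on a single tensor pair (my own illustration, not mergekit's actual code):

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two weight tensors.

    Treats the flattened tensors as directions on a hypersphere and
    interpolates along the arc between them: t=0 returns a, t=1 returns b.
    """
    a_flat, b_flat = a.ravel(), b.ravel()
    a_norm = a_flat / (np.linalg.norm(a_flat) + eps)
    b_norm = b_flat / (np.linalg.norm(b_flat) + eps)
    # Angle between the two tensors, clipped for numerical safety
    dot = np.clip(np.dot(a_norm, b_norm), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation
        return (1 - t) * a + t * b
    s = np.sin(theta)
    return (np.sin((1 - t) * theta) / s) * a + (np.sin(t * theta) / s) * b
```

The per-filter gradients above mean the self-attention tensors lean heavily toward the first model in early layers (t=0.9) and toward the second in later layers, while the MLP tensors do roughly the opposite; everything else uses a flat t=0.45.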


### Format Notes

Solar is designed for 4k context, but Nyx reports that his merge works to 8k. Given that this model SLERPs a gradient back into that merge, I'm not sure which limit applies here. Use Alpaca instruct formatting.
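For reference, a small helper that builds the standard Alpaca-style prompt (the exact system preamble varies between Alpaca finetunes, so treat this as the common template rather than something specific to this model):

```python
def alpaca_prompt(instruction, context=None):
    """Build an Alpaca-style instruct prompt, with an optional Input block."""
    if context:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{context}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )
```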

### Tentative Conclusions (After a Dozen or So Tests)

This model seems to have better prose and less GPT-ish language than the last version, with no degradation in coherence, while retaining the smarts from Frostwind (plus the medical LoRA). I'm very pleased with it: it's exactly what I wanted, basically Nyx's Frostmaid but smarter.

Cheers to all the finetuners, mergers, and developers, without whom open-source models wouldn't be half of what they are.

Resources used:

- https://huggingface.co/NyxKrage/FrostMaid-10.7B-TESTING-pt
- https://huggingface.co/Sao10K/Frostwind-10.7B-v1
- https://huggingface.co/NyxKrage/Solar-Doc-10.7B-Lora
- https://github.com/cg123/mergekit/tree/main