Initial release

- .gitattributes +2 -0
- README.md +58 -0
- kunoichi-lemon-royale-7B.Q8_0.gguf +3 -0
.gitattributes
CHANGED

@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+*.gguf filter=lfs diff=lfs merge=lfs -text
+*.GGUF filter=lfs diff=lfs merge=lfs -text
README.md
CHANGED

@@ -1,3 +1,61 @@

---
base_model:
- core-3/kuno-royale-v2-7b
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- SanjiWatsuki/Kunoichi-7B
- KatyTheCutie/LemonadeRP-4.5.3
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-nc-4.0

---

# kunoichi-lemon-royale-7B

Lightly tested with both Alpaca and ChatML prompts. Works with temperature 1.0 and minP 0.01, but feel free to vary the settings. Tested to 8K context.
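A minimal sketch of those settings with llama-cpp-python (assuming a reasonably recent release that supports `min_p`; the local model path and the Alpaca-style prompt below are placeholders, not part of the original card):

```python
# Sketch: run the Q8_0 GGUF with the settings suggested above.
# Assumes llama-cpp-python with min_p support; path and prompt are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="kunoichi-lemon-royale-7B.Q8_0.gguf",  # local copy of the GGUF
    n_ctx=8192,                                       # tested to 8K context
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nIntroduce your character in a short scene.\n\n"
    "### Response:\n"
)

out = llm(
    prompt,
    max_tokens=256,
    temperature=1.0,  # suggested starting point
    min_p=0.01,       # minP as noted above
)
print(out["choices"][0]["text"])
```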
This model has a tendency to lean into revealing character interiority when generating narrative, which some people might find interesting. I found the model good at not only following the character card but also taking strong hints from the first message.

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
21 |
+
|
22 |
+
## Merge Details
|
23 |
+
### Merge Method
|
24 |
+
|
25 |
+
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [SanjiWatsuki/Kunoichi-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B) as a base. Each of the models had strengths I liked to varying degrees, leading to weights and densities being adjusted in aesthetic proportion.
|
26 |
+
|
27 |
+
### Models Merged
|
28 |
+
|
29 |
+
The following models were included in the merge:
|
30 |
+
* [core-3/kuno-royale-v2-7b](https://huggingface.co/core-3/kuno-royale-v2-7b)
|
31 |
+
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
|
32 |
+
* [KatyTheCutie/LemonadeRP-4.5.3](https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3)
|
33 |
+
|
34 |
+
### Configuration
|
35 |
+
|
36 |
+
The following YAML configuration was used to produce this model:
|
37 |
+
|
38 |
+
```yaml
|
39 |
+
models:
|
40 |
+
- model: SanjiWatsuki/Kunoichi-7B
|
41 |
+
# no parameters necessary for base model
|
42 |
+
- model: KatyTheCutie/LemonadeRP-4.5.3
|
43 |
+
parameters:
|
44 |
+
weight: 0.3
|
45 |
+
density: 0.4
|
46 |
+
- model: core-3/kuno-royale-v2-7b
|
47 |
+
parameters:
|
48 |
+
weight: 0.3
|
49 |
+
density: 0.4
|
50 |
+
- model: SanjiWatsuki/Kunoichi-DPO-v2-7B
|
51 |
+
parameters:
|
52 |
+
weight: 0.4
|
53 |
+
density: 0.8
|
54 |
+
merge_method: dare_ties
|
55 |
+
base_model: SanjiWatsuki/Kunoichi-7B
|
56 |
+
parameters:
|
57 |
+
int8_mask: true
|
58 |
+
normalize: true
|
59 |
+
dtype: bfloat16
|
60 |
+
|
61 |
+
```
|
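To redo the merge, the configuration above can be fed to mergekit. A rough sketch using mergekit's Python entry point (option names follow recent mergekit releases and may differ by version; `config.yml` is assumed to hold the YAML above):

```python
# Sketch: reproduce the merge from the YAML above with mergekit.
# Assumes the config is saved as config.yml; option names follow recent mergekit releases.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./kunoichi-lemon-royale-7B",  # output directory for the merged weights
    options=MergeOptions(
        cuda=False,           # set True to merge on GPU
        copy_tokenizer=True,  # carry the base model's tokenizer over
    ),
)
```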
kunoichi-lemon-royale-7B.Q8_0.gguf
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e03a8cac3b436ea961b292ac61ff8f75005a2546155cbf61f4d02c452bf1169e
+size 7695857376
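The lines above are a Git LFS pointer; the actual ~7.7 GB Q8_0 file lives in LFS storage. A sketch of pulling it with huggingface_hub (the repo id below is a placeholder, since it does not appear in this diff):

```python
# Sketch: fetch the Q8_0 GGUF without cloning the full repo via git-lfs.
# The repo_id is a placeholder; substitute the actual Hugging Face repo.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="<namespace>/kunoichi-lemon-royale-7B",   # placeholder repo id
    filename="kunoichi-lemon-royale-7B.Q8_0.gguf",
)
print(path)  # local cache path of the ~7.7 GB file
```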