mav23 committed on
Commit
1ddf184
1 Parent(s): c847a3e

Upload folder using huggingface_hub

Files changed (3)
  1. .gitattributes +1 -0
  2. README.md +192 -0
  3. stardust-12b-v2.Q4_0.gguf +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ stardust-12b-v2.Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,192 @@
---
language:
- en
license: apache-2.0
tags:
- chat
- mistral
- roleplay
- creative-writing
base_model:
- nbeerbower/mistral-nemo-bophades-12B
- anthracite-org/magnum-v2-12b
- Sao10K/MN-12B-Lyra-v3
- Gryphe/Pantheon-RP-1.6-12b-Nemo
pipeline_tag: text-generation
model-index:
- name: StarDust-12b-v2
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 56.29
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Luni/StarDust-12b-v2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 34.95
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Luni/StarDust-12b-v2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 5.97
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Luni/StarDust-12b-v2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 5.82
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Luni/StarDust-12b-v2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 14.26
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Luni/StarDust-12b-v2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 27.1
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Luni/StarDust-12b-v2
      name: Open LLM Leaderboard
library_name: transformers
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6303fa71fc783bfc7443e7ae/c3ddWBoz-lINEykUDCoXy.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6303fa71fc783bfc7443e7ae/hOpgDxJS2sDO7HzuC9e18.png)

# StarDust-12b-v2

## Quants

- GGUF: [mradermacher/StarDust-12b-v2-GGUF](https://huggingface.co/mradermacher/StarDust-12b-v2-GGUF)
- weighted/imatrix GGUF: [mradermacher/StarDust-12b-v2-i1-GGUF](https://huggingface.co/mradermacher/StarDust-12b-v2-i1-GGUF/tree/main)
- exl2: [lucyknada/Luni_StarDust-12b-v2-exl2](https://huggingface.co/lucyknada/Luni_StarDust-12b-v2-exl2)

## Description | Usecase

- In my opinion, this merge produces more vibrant, less generic Sonnet-inspired prose; it can be gentle or harsh where asked.
- v2 uses the non-KTO magnum, which shows fewer "claudeisms" (patterns that make the story feel repetitive).
- Note on non-KTO: opinions are sharply split between people who prefer and dislike the KTO variant. If you prefer it, [Luni/StarDust-12b-v1](https://huggingface.co/Luni/StarDust-12b-v1) still uses the KTO version.
- In early testing, users reported a much better experience in longer roleplays and an ability to add a creative touch to an otherwise stable experience.

Just like with v1:
- This model is intended to be used as a role-playing model.
- Its direct conversational output is weak; it is simply not made for that.
- Extending on conversational output: the model is designed for roleplay; direct instructing or general-purpose use is NOT recommended.

## Initial Feedback

- Initial feedback shows the model to be a solid "go-to" choice for creative story-writing.
- The prose has been described as "amazing", with many users making it their default model.

## Prompting

### ChatML has proven to be the BEST choice.

Both Mistral and ChatML templates should work, though I had better results with ChatML.

ChatML example:

```py
"""<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""
```

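The template above can also be built programmatically. A minimal sketch (the `format_chatml` helper is my own, not part of any library) that renders a message list into the ChatML string shown above:

```python
def format_chatml(messages, add_generation_prompt=True):
    """Render (role, content) pairs into a ChatML prompt string."""
    parts = []
    for role, content in messages:
        parts.append(f"<|im_start|>{role}\n{content}<|im_end|>")
    if add_generation_prompt:
        # Leave the assistant turn open so the model continues from here.
        parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = format_chatml([
    ("user", "Hi there!"),
    ("assistant", "Nice to meet you!"),
    ("user", "Can I ask a question?"),
])
print(prompt)
```

This reproduces the example verbatim, including the trailing open assistant turn.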
## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, using [Sao10K/MN-12B-Lyra-v3](https://huggingface.co/Sao10K/MN-12B-Lyra-v3) as a base.

### Models Merged

The following models were included in the merge:
* [nbeerbower/mistral-nemo-bophades-12B](https://huggingface.co/nbeerbower/mistral-nemo-bophades-12B)
* [anthracite-org/magnum-v2-12b](https://huggingface.co/anthracite-org/magnum-v2-12b)
* [Gryphe/Pantheon-RP-1.6-12b-Nemo](https://huggingface.co/Gryphe/Pantheon-RP-1.6-12b-Nemo)
* [Sao10K/MN-12B-Lyra-v3](https://huggingface.co/Sao10K/MN-12B-Lyra-v3)

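Merges like this are typically driven by a mergekit config. The card does not include the actual config used, so the sketch below is hypothetical: only the method, base model, and model list come from this card, and the `weight`/`density` values are illustrative placeholders, not the real ones.

```yaml
# Hypothetical mergekit config -- method, base, and model list from the card
# above; weight/density values are placeholders, NOT the values actually used.
merge_method: dare_ties
base_model: Sao10K/MN-12B-Lyra-v3
models:
  - model: nbeerbower/mistral-nemo-bophades-12B
    parameters:
      weight: 0.25   # placeholder
      density: 0.4   # placeholder
  - model: anthracite-org/magnum-v2-12b
    parameters:
      weight: 0.25   # placeholder
      density: 0.4   # placeholder
  - model: Gryphe/Pantheon-RP-1.6-12b-Nemo
    parameters:
      weight: 0.25   # placeholder
      density: 0.4   # placeholder
dtype: bfloat16
```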
### Special Thanks

Special thanks to the SillyTilly, and to myself, for helping me find the energy to finish this.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Luni__StarDust-12b-v2).

| Metric              | Value |
|---------------------|------:|
| Avg.                | 24.06 |
| IFEval (0-Shot)     | 56.29 |
| BBH (3-Shot)        | 34.95 |
| MATH Lvl 5 (4-Shot) |  5.97 |
| GPQA (0-shot)       |  5.82 |
| MuSR (0-shot)       | 14.26 |
| MMLU-PRO (5-shot)   | 27.10 |

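The leaderboard average is simply the unweighted mean of the six benchmark scores, which is easy to check:

```python
# Scores from the leaderboard table above.
scores = {
    "IFEval (0-Shot)": 56.29,
    "BBH (3-Shot)": 34.95,
    "MATH Lvl 5 (4-Shot)": 5.97,
    "GPQA (0-shot)": 5.82,
    "MuSR (0-shot)": 14.26,
    "MMLU-PRO (5-shot)": 27.10,
}
avg = sum(scores.values()) / len(scores)  # 24.065, shown as 24.06 on the card
```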
stardust-12b-v2.Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1dbb347453ecb95f82ff2722d54220ad15690b63faa9a8801bb3d328429e0c5a
+ size 7071700960
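The `oid sha256:` line above is the checksum Git LFS records for the ~7 GB GGUF file. After downloading, you can confirm the file is intact by recomputing it; this is a generic sketch (the helper name is mine), streaming the file so it never has to fit in RAM:

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Compute a file's SHA-256 by streaming it in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the oid recorded in the LFS pointer:
# sha256_of_file("stardust-12b-v2.Q4_0.gguf") should equal
# "1dbb347453ecb95f82ff2722d54220ad15690b63faa9a8801bb3d328429e0c5a"
```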