redrix committed on
Commit d67115a
1 Parent(s): cf6daa0
Files changed (1)
  1. README.md +63 -62
README.md CHANGED
---
base_model:
- inflatebot/MN-12B-Mag-Mell-R1
- TheDrummer/UnslopNemo-12B-v4.1
- ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2
- DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS
library_name: transformers
tags:
- mergekit
- merge

---
# <span style="color:yellow">Note: Proper README will get added soon.</span>
# AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
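A merge like this is produced by pointing mergekit at a YAML config (the exact file is listed under Configuration below). The snippet below is a quick sketch of driving that from Python, assuming mergekit is installed and the config is saved as `config.yaml`; the entry points follow mergekit's documented library usage, but treat the exact signatures as assumptions:

```python
# Minimal sketch of running a mergekit merge from Python.
# Paths are placeholders; adjust MergeOptions to your hardware.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", encoding="utf-8") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Writes the merged weights, config, and tokenizer to ./merged.
run_merge(
    config,
    out_path="./merged",
    options=MergeOptions(cuda=False, copy_tokenizer=True),
)
```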

## Merge Details
### Merge Method

This model was merged using the della_linear merge method, with [TheDrummer/UnslopNemo-12B-v4.1](https://huggingface.co/TheDrummer/UnslopNemo-12B-v4.1) as the base.
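In rough terms, della_linear takes each model's parameter delta from the base, randomly drops low-magnitude delta entries (the drop rate is centered on `1 - density` and spread by `epsilon`), rescales the survivors, and sums the deltas using each model's `weight`; `lambda` scales the combined delta and `normalize: false` leaves the weights un-normalized. The sketch below is an illustrative approximation under those assumptions, not mergekit's actual implementation:

```python
# Illustrative approximation of della_linear on flat parameter tensors;
# the exact drop-probability window is an assumption, not mergekit's code.
import torch

def della_linear(base, tuned, weights, densities, epsilon=0.05, lam=1.0):
    merged_delta = torch.zeros_like(base)
    for theta, w, d in zip(tuned, weights, densities):
        delta = theta - base
        # Magnitude rank in [0, 1]: the largest |delta| gets rank 1.
        rank = delta.abs().argsort().argsort().float() / max(delta.numel() - 1, 1)
        # Drop probability centered on (1 - d); higher-magnitude deltas
        # are dropped less often, within a +/- epsilon/2 window.
        drop_p = ((1 - d) + epsilon * (0.5 - rank)).clamp(0.0, 1.0)
        mask = torch.bernoulli(1.0 - drop_p)
        # Rescale survivors so the pruned delta is unbiased in expectation.
        merged_delta += w * delta * mask / (1.0 - drop_p).clamp_min(1e-8)
    # normalize: false -> weights are applied as-is; lambda scales the sum.
    return base + lam * merged_delta
```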

### Models Merged

The following models were included in the merge:
* [inflatebot/MN-12B-Mag-Mell-R1](https://huggingface.co/inflatebot/MN-12B-Mag-Mell-R1)
* [ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2](https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2)
* [DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS](https://huggingface.co/DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: TheDrummer/UnslopNemo-12B-v4.1
    parameters:
      weight: 0.25
      density: 0.6
  - model: ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2
    parameters:
      weight: 0.25
      density: 0.6
  - model: DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS
    parameters:
      weight: 0.2
      density: 0.4
  - model: inflatebot/MN-12B-Mag-Mell-R1
    parameters:
      weight: 0.30
      density: 0.7
base_model: TheDrummer/UnslopNemo-12B-v4.1
merge_method: della_linear
dtype: bfloat16
chat_template: "chatml"
tokenizer_source: union
parameters:
  normalize: false
  int8_mask: true
  epsilon: 0.05
  lambda: 1
```
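
Once produced (or downloaded from the Hub), the merge loads like any transformers causal LM; since `chat_template: "chatml"` is set, prompts can go through the tokenizer's chat template. A minimal sketch, where the repository id is an assumption based on this repo's name and the generation settings are placeholders:

```python
# Sketch: load the merged model and chat through its ChatML template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS"  # assumed id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the merge's dtype
    device_map="auto",
)

# ChatML wraps each message in <|im_start|>/<|im_end|> markers.
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```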