icefog72 committed on
Commit
0f8f73f
1 Parent(s): 4f2b780

Update README.md

Files changed (1)
  1. README.md +52 -48
README.md CHANGED
@@ -1,48 +1,52 @@
- ---
- base_model: []
- library_name: transformers
- tags:
- - mergekit
- - merge
-
- ---
- # IceSakeV8RP-7b
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the SLERP merge method.
-
- ### Models Merged
-
- The following models were included in the merge:
- * C:\Users\roman\FModels\IceLemonTea-IceCoffeRP-7b
- * C:\Users\roman\FModels\IceSakeV7RP-7b
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- slices:
- - sources:
- - model: C:\Users\roman\FModels\IceLemonTea-IceCoffeRP-7b
- layer_range: [0, 32]
- - model: C:\Users\roman\FModels\IceSakeV7RP-7b
- layer_range: [0, 32]
-
- merge_method: slerp
- base_model: C:\Users\roman\FModels\IceLemonTea-IceCoffeRP-7b
- parameters:
- t:
- - filter: self_attn
- value: [0, 0.5, 0.3, 0.7, 1]
- - filter: mlp
- value: [1, 0.5, 0.7, 0.3, 0]
- - value: 0.5 # fallback for rest of tensors
- dtype: float16
-
-
- ```
+ ---
+ license: cc-by-nc-4.0
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+ - alpaca
+ - mistral
+ - not-for-all-audiences
+ - nsfw
+
+ ---
+ # IceSakeV8RP-7b
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the SLERP merge method.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * IceLemonTea-IceCoffeRP-7b
+ * IceSakeV7RP-7b
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ slices:
+ - sources:
+ - model: IceLemonTea-IceCoffeRP-7b
+ layer_range: [0, 32]
+ - model: IceSakeV7RP-7b
+ layer_range: [0, 32]
+
+ merge_method: slerp
+ base_model: IceLemonTea-IceCoffeRP-7b
+ parameters:
+ t:
+ - filter: self_attn
+ value: [0, 0.5, 0.3, 0.7, 1]
+ - filter: mlp
+ value: [1, 0.5, 0.7, 0.3, 0]
+ - value: 0.5 # fallback for rest of tensors
+ dtype: float16
+
+
+ ```
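In the config above, each five-element `value` list gives anchor points for the interpolation factor `t`, which mergekit spreads across the layer range, so self_attn tensors lean toward the base model in early layers while mlp tensors do the opposite, with `0.5` as the fallback for all other tensors. The SLERP operation itself can be sketched roughly as follows (a minimal NumPy illustration of spherical linear interpolation between two flattened weight tensors, not mergekit's actual implementation; the `slerp` function and its `eps` handling are illustrative):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation: blend v0 toward v1 by factor t in [0, 1].

    The angle between the tensors is measured on their normalized copies,
    then the raw tensors are combined with sine-weighted coefficients.
    """
    v0n = v0 / (np.linalg.norm(v0) + eps)
    v1n = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    omega = np.arccos(dot)          # angle between the two weight directions
    if omega < eps:                 # nearly parallel: fall back to plain lerp
        return (1.0 - t) * v0 + t * v1
    so = np.sin(omega)
    return np.sin((1.0 - t) * omega) / so * v0 + np.sin(t * omega) / so * v1
```

At `t=0` this returns the first tensor, at `t=1` the second, and intermediate values follow the arc between them rather than the straight line, which is the motivation for SLERP over linear averaging when merging model weights.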