automerger committed
Commit 5d10c0f · verified · 1 parent: 8a35c9f

Upload folder using huggingface_hub

README.md CHANGED
````diff
@@ -6,31 +6,37 @@ tags:
 - lazymergekit
 - automerger
 base_model:
+- MSL7/INEX12-7b
 - automerger/YamshadowExperiment28-7B
 ---
 
 # Inex12Yamshadowexperiment28-7B
 
 Inex12Yamshadowexperiment28-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
+* [MSL7/INEX12-7b](https://huggingface.co/MSL7/INEX12-7b)
 * [automerger/YamshadowExperiment28-7B](https://huggingface.co/automerger/YamshadowExperiment28-7B)
 
 ## 🧩 Configuration
 
 ```yaml
-models:
-  - model: MSL7/INEX12-7b
-    # No parameters necessary for base model
-  - model: automerger/YamshadowExperiment28-7B
-    parameters:
-      density: 0.53
-      weight: 0.6
-merge_method: dare_ties
+slices:
+- sources:
+  - model: MSL7/INEX12-7b
+    layer_range: [0, 32]
+  - model: automerger/YamshadowExperiment28-7B
+    layer_range: [0, 32]
+merge_method: slerp
 base_model: MSL7/INEX12-7b
 parameters:
-  int8_mask: true
+  t:
+    - filter: self_attn
+      value: [0, 0.5, 0.3, 0.7, 1]
+    - filter: mlp
+      value: [1, 0.5, 0.7, 0.3, 0]
+    - value: 0.5
 dtype: bfloat16
 random_seed: 0
 ```
 
 ## 💻 Usage
 
````
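The swap from `dare_ties` to `slerp` changes how the two checkpoints are combined: instead of sparse task-vector merging, each tensor is spherically interpolated between the two models, with the interpolation factor `t` following the anchor curves above across the 32 layers. A rough sketch of the idea, assuming PyTorch; `slerp` and `t_for_layer` are illustrative helpers, not mergekit's actual implementation:

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical interpolation between two weight tensors of the same shape."""
    a, b = v0.double().flatten(), v1.double().flatten()
    cos = torch.dot(a, b) / (a.norm() * b.norm() + eps)
    omega = torch.acos(cos.clamp(-1.0, 1.0))   # angle between the two tensors
    if omega.abs() < eps:                      # near-parallel: fall back to lerp
        return (1 - t) * v0 + t * v1
    out = (torch.sin((1 - t) * omega) * a + torch.sin(t * omega) * b) / torch.sin(omega)
    return out.reshape(v0.shape).to(v0.dtype)

def t_for_layer(layer: int, n_layers: int, anchors: list[float]) -> float:
    """Interpolate the anchor curve (e.g. [0, 0.5, 0.3, 0.7, 1]) across layers."""
    pos = layer / (n_layers - 1) * (len(anchors) - 1)
    lo, frac = int(pos), pos % 1
    hi = min(int(pos) + 1, len(anchors) - 1)
    return anchors[lo] * (1 - frac) + anchors[hi] * frac

# t == 0 keeps the base model (MSL7/INEX12-7b); t == 1 takes the other model.
t = t_for_layer(10, 32, [0, 0.5, 0.3, 0.7, 1])   # self_attn curve, layer 10
merged = slerp(t, torch.randn(16, 16), torch.randn(16, 16))
```

The trailing `- value: 0.5` means any tensor not matched by the `self_attn` or `mlp` filters is blended evenly between the two models.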
mergekit_config.yml CHANGED
```diff
@@ -1,14 +1,19 @@
 
-models:
-  - model: MSL7/INEX12-7b
-    # No parameters necessary for base model
-  - model: automerger/YamshadowExperiment28-7B
-    parameters:
-      density: 0.53
-      weight: 0.6
-merge_method: dare_ties
+slices:
+- sources:
+  - model: MSL7/INEX12-7b
+    layer_range: [0, 32]
+  - model: automerger/YamshadowExperiment28-7B
+    layer_range: [0, 32]
+merge_method: slerp
 base_model: MSL7/INEX12-7b
 parameters:
-  int8_mask: true
+  t:
+    - filter: self_attn
+      value: [0, 0.5, 0.3, 0.7, 1]
+    - filter: mlp
+      value: [1, 0.5, 0.7, 0.3, 0]
+    - value: 0.5
 dtype: bfloat16
 random_seed: 0
+
```
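This file is the configuration mergekit actually consumed (the README embeds the same YAML). A minimal sketch of re-running the merge locally, assuming a recent `mergekit` install; the API below follows mergekit's README and may drift between versions, and the output directory name is arbitrary:

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the exact config shipped with this commit.
with open("mergekit_config.yml", encoding="utf-8") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    config,
    "./Inex12Yamshadowexperiment28-7B",          # hypothetical output path
    options=MergeOptions(copy_tokenizer=True),   # carry over the base tokenizer
)
```

The CLI equivalent is `mergekit-yaml mergekit_config.yml <output-dir>`.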
model-00001-of-00002.safetensors CHANGED
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:bf7eaca5ee4980961b57a4be7833a41603b79bb5f6662179b24b3bbca574b08b
+oid sha256:5ea50d3d5cf2bbfc16a95f2375c1141b18e432c31c5c339245ecfd6450d9d04f
 size 9825524456
```
model-00002-of-00002.safetensors CHANGED
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e0b728c0003fa5e05d2765eb60747b0dbc2337d073973a715d772bbbef0788a9
+oid sha256:09948a2415914dd059d5606609af3719e7d0c376e11487bcd800650bc32a25e1
 size 4657973592
```
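The two `.safetensors` entries are Git LFS pointers, so the commit only swaps their `oid sha256:` content hashes; the shard sizes are unchanged. A small sketch for verifying downloaded shards against those pointers, assuming both files sit in the current directory:

```python
import hashlib

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    """Stream a file through SHA-256 in 1 MiB chunks to avoid loading ~10 GB into RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Expected digests, taken from the LFS pointers in this commit.
expected = {
    "model-00001-of-00002.safetensors": "5ea50d3d5cf2bbfc16a95f2375c1141b18e432c31c5c339245ecfd6450d9d04f",
    "model-00002-of-00002.safetensors": "09948a2415914dd059d5606609af3719e7d0c376e11487bcd800650bc32a25e1",
}
for name, digest in expected.items():
    status = "OK" if sha256_of(name) == digest else "MISMATCH"
    print(f"{name}: {status}")
```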