Heithem777 committed on
Commit e37ad9b
1 Parent(s): ff57766

Upload folder using huggingface_hub

Files changed (1):
  README.md +21 -20
README.md CHANGED
@@ -3,37 +3,38 @@ tags:
 - merge
 - mergekit
 - lazymergekit
-- OpenPipe/mistral-ft-optimized-1218
-- mlabonne/NeuralHermes-2.5-Mistral-7B
+- samir-fama/SamirGPT-v1
+- abacusai/Slerp-CM-mist-dpo
 base_model:
-- OpenPipe/mistral-ft-optimized-1218
-- mlabonne/NeuralHermes-2.5-Mistral-7B
+- samir-fama/SamirGPT-v1
+- abacusai/Slerp-CM-mist-dpo
 ---
 
 # NeuralPipe-7B-slerp
 
 NeuralPipe-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
-* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
-* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
+* [samir-fama/SamirGPT-v1](https://huggingface.co/samir-fama/SamirGPT-v1)
+* [abacusai/Slerp-CM-mist-dpo](https://huggingface.co/abacusai/Slerp-CM-mist-dpo)
 
 ## 🧩 Configuration
 
 ```yaml
-slices:
-  - sources:
-      - model: OpenPipe/mistral-ft-optimized-1218
-        layer_range: [0, 32]
-      - model: mlabonne/NeuralHermes-2.5-Mistral-7B
-        layer_range: [0, 32]
-merge_method: slerp
-base_model: OpenPipe/mistral-ft-optimized-1218
+models:
+  - model: mistralai/Mistral-7B-v0.1
+    # No parameters necessary for base model
+  - model: samir-fama/SamirGPT-v1
+    parameters:
+      density: 0.53
+      weight: 0.4
+  - model: abacusai/Slerp-CM-mist-dpo
+    parameters:
+      density: 0.53
+      weight: 0.3
+
+merge_method: dare_ties
+base_model: mistralai/Mistral-7B-v0.1
 parameters:
-  t:
-    - filter: self_attn
-      value: [0, 0.5, 0.3, 0.7, 1]
-    - filter: mlp
-      value: [1, 0.5, 0.7, 0.3, 0]
-    - value: 0.5
+  int8_mask: true
 dtype: bfloat16
 ```
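For context on what the removed `slerp` configuration computes: spherical linear interpolation blends each pair of corresponding weight tensors along the arc between their directions, with the `t` schedule picking the interpolation factor per layer group and filter (`self_attn` vs. `mlp`). A minimal NumPy sketch of slerp on flat vectors — an illustration of the idea only, not mergekit's actual implementation:

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight vectors.

    t=0 returns v0, t=1 returns v1; intermediate t follows the arc
    between the two directions rather than the straight chord.
    """
    v0n = v0 / (np.linalg.norm(v0) + eps)
    v1n = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:
        # Nearly parallel vectors: plain linear interpolation is fine.
        return (1.0 - t) * v0 + t * v1
    s0 = np.sin((1.0 - t) * theta) / np.sin(theta)
    s1 = np.sin(t * theta) / np.sin(theta)
    return s0 * v0 + s1 * v1
```

In the removed config, `value: [0, 0.5, 0.3, 0.7, 1]` ramps `t` for the self-attention tensors across layer groups, so early layers stay closer to the first model and later layers to the second, while the bare `value: 0.5` is the fallback for all other tensors.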
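The replacement config switches to `dare_ties`: each fine-tuned model's weight delta from `mistralai/Mistral-7B-v0.1` is randomly pruned down to the given `density` and rescaled so its expected value is preserved, and the surviving deltas are then combined using the per-model `weight`s with TIES-style sign resolution. A rough sketch of the drop-and-rescale step only (hypothetical helper name, not mergekit's code):

```python
import numpy as np

def dare_prune(delta, density, seed=0):
    # Keep each delta entry with probability `density`, zero the rest,
    # and rescale survivors by 1/density so the expected update is unchanged.
    rng = np.random.default_rng(seed)
    mask = rng.random(delta.shape) < density
    return np.where(mask, delta / density, 0.0)
```

With `density: 0.53`, roughly half of each model's delta entries survive; the rescaling keeps the merged update unbiased, which is what lets two pruned deltas (weights 0.4 and 0.3 here) be added onto the base without drowning each other out.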