mav23 committed
Commit 4ca6581
Parent: e6c6a53

Upload folder using huggingface_hub

Files changed (3)
  1. .gitattributes +1 -0
  2. README.md +57 -0
  3. maid-yuzu-v8.Q4_0.gguf +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+ maid-yuzu-v8.Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,57 @@
+ ---
+ base_model:
+ - smelborp/MixtralOrochi8x7B
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+
+ ---
+ # maid-yuzu-v8
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ v7's approach worked better than I thought, so I tried something even weirder as a test. I don't expect a proper model to come out of it, but I'm curious about the results.
+
+ ## Merge Details
+ ### Merge Method
+
+ These models were merged using the SLERP method, in the following order; the number after each step is the interpolation weight t given to the newly added model (see the sketch after the list):
+
+ * maid-yuzu-v8-base: mistralai/Mixtral-8x7B-v0.1 + mistralai/Mixtral-8x7B-Instruct-v0.1, t = 0.5
+ * maid-yuzu-v8-step1: above + jondurbin/bagel-dpo-8x7b-v0.2, t = 0.25
+ * maid-yuzu-v8-step2: above + cognitivecomputations/dolphin-2.7-mixtral-8x7b, t = 0.25
+ * maid-yuzu-v8-step3: above + NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss, t = 0.25
+ * maid-yuzu-v8-step4: above + ycros/BagelMIsteryTour-v2-8x7B, t = 0.25
+ * maid-yuzu-v8: above + smelborp/MixtralOrochi8x7B, t = 0.25
+
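+ For intuition, here is a minimal sketch (mine, not the author's or mergekit's actual code; it only assumes numpy) of the per-tensor spherical linear interpolation a SLERP merge performs, where t is the weight given to the newly added model at each step above:
+
+ ```python
+ # Sketch only: mergekit's real implementation handles sharding,
+ # tokenizers, and per-layer t schedules; this shows just the math.
+ import numpy as np
+
+ def slerp(a: np.ndarray, b: np.ndarray, t: float) -> np.ndarray:
+     """Spherically interpolate two weight tensors; t=0 -> a, t=1 -> b."""
+     a_flat, b_flat = a.ravel(), b.ravel()
+     # Angle between the two parameter vectors.
+     cos_omega = np.dot(a_flat, b_flat) / (
+         np.linalg.norm(a_flat) * np.linalg.norm(b_flat)
+     )
+     omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
+     if np.isclose(np.sin(omega), 0.0):
+         # (Anti)parallel tensors: fall back to linear interpolation.
+         return (1.0 - t) * a + t * b
+     scale_a = np.sin((1.0 - t) * omega) / np.sin(omega)
+     scale_b = np.sin(t * omega) / np.sin(omega)
+     return (scale_a * a_flat + scale_b * b_flat).reshape(a.shape)
+ ```
+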
+ ### Models Merged
+
+ The following models were included in the merge:
+ * [smelborp/MixtralOrochi8x7B](https://huggingface.co/smelborp/MixtralOrochi8x7B)
+ * ../maid-yuzu-v8-step4
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ base_model:
+   model:
+     path: ../maid-yuzu-v8-step4
+ dtype: bfloat16
+ merge_method: slerp
+ parameters:
+   t:
+   - value: 0.25
+ slices:
+ - sources:
+   - layer_range: [0, 32]
+     model:
+       model:
+         path: ../maid-yuzu-v8-step4
+   - layer_range: [0, 32]
+     model:
+       model:
+         path: smelborp/MixtralOrochi8x7B
+ ```
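+
+ Since each step keeps (1 - t) = 0.75 of the running merge and takes t = 0.25 of the newcomer, the later a model joins the chain, the more of it survives. A rough back-of-the-envelope (my own arithmetic, treating the chained SLERPs as if they were linear, which they are not exactly):
+
+ ```python
+ # Approximate share of each constituent in the final model under a
+ # linear-interpolation approximation of the chained SLERPs above.
+ t = 0.25
+ shares = {"maid-yuzu-v8-base (Mixtral base/Instruct, 50/50)": 1.0}
+ for newcomer in [
+     "jondurbin/bagel-dpo-8x7b-v0.2",
+     "cognitivecomputations/dolphin-2.7-mixtral-8x7b",
+     "NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss",
+     "ycros/BagelMIsteryTour-v2-8x7B",
+     "smelborp/MixtralOrochi8x7B",
+ ]:
+     shares = {name: s * (1 - t) for name, s in shares.items()}
+     shares[newcomer] = t
+
+ for name, s in shares.items():
+     print(f"{s:.4f}  {name}")
+ # ~0.237 base, ~0.079 bagel-dpo, ~0.105 dolphin,
+ # ~0.141 Noromaid, ~0.188 BagelMIsteryTour, 0.250 Orochi
+ ```
+
+ To re-run a step like this one, the config above is the format consumed by mergekit's `mergekit-yaml` entry point (e.g. `mergekit-yaml config.yml ./output-dir`); note that `../maid-yuzu-v8-step4` assumes the previous intermediate merge is saved next to the working directory.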
maid-yuzu-v8.Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:de538e4217aa3eae7193020101e8bbb303248618ef74833569b91542fac255af
+ size 26443589216