Apel-sin committed commit 6c10d6b (1 parent: af282c8)

add measurement.json

Files changed (2):
1. README.md +91 -0
2. measurement.json +0 -0

README.md ADDED:
---
base_model:
- wzhouad/gemma-2-9b-it-WPO-HB
- UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3
- princeton-nlp/gemma-2-9b-it-SimPO
library_name: transformers
tags:
- mergekit
- merge
---
# Gemma Advanced V2.1

This is a merge of the 'smartest' advanced fine-tunes available for Gemma-2-9b-it: WPO, SimPO, and SPPO. The merge was performed via the SOTA 'della' merge method, with merge parameters hand-tuned for best results. The Q8_0 quant is highly recommended until better quants come along.
## Notes and observations

* The extreme temperature sensitivity from V1 has been fixed; the model no longer needs to be run at lower temperatures
* Has a somewhat different writing style than any of the parent models
* Great instruction following
* Tracks plot details well and has good situational understanding
* Seems to have a good understanding of psychology, emotions, and creative writing
* More 'sane' than base gemma-it, SPPO, or SimPO when portraying characters - not as prone to the 'Cruella De Vil' or 'Evil Sorceress' tendencies of SPPO and SimPO
* Would likely serve as a good base for further merges
* I'm looking for a job, if you're hiring. I'm a skilled Python developer who brings strong DevOps skills along with an ever-growing knowledge of machine learning pipelines and models. Message me if you want to talk about what I can bring to your team.
* Overall, this feels like a very useful and successful merge.
## Quantized GGUFs can be found here

* [My quants, Q8_0 tested - jsgreenawalt/gemma-2-9B-it-advanced-v2.1-GGUF](https://huggingface.co/jsgreenawalt/gemma-2-9B-it-advanced-v2.1-GGUF)
* [iMatrix - mradermacher/gemma-2-9B-it-advanced-v2.1-i1-GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-i1-GGUF)
* [QuantFactory/gemma-2-9B-it-advanced-v2.1-GGUF](https://huggingface.co/QuantFactory/gemma-2-9B-it-advanced-v2.1-GGUF)
* [mradermacher/gemma-2-9B-it-advanced-v2.1-GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-GGUF)

Thanks to everyone who was kind enough to provide quants! I'll link to other quants as they appear.
## Sample Ollama Modelfile

```
FROM /path/to/file/gemma-2-9B-it-advanced-v2.1-Q8_0.gguf
PARAMETER stop "<start_of_turn>"
PARAMETER stop "<end_of_turn>"
PARAMETER num_ctx 8192
TEMPLATE """<start_of_turn>user
{{ if .System }}{{ .System }} {{ end }}{{ .Prompt }}<end_of_turn>
<start_of_turn>model
{{ .Response }}<end_of_turn>"""
```
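As a quick sanity check on the template, here is a small Python sketch of how the placeholders above expand for a single user turn. This is a hypothetical stand-in for Ollama's own template renderer, written only to show the Gemma chat format the Modelfile produces:

```python
def render_gemma_prompt(prompt: str, system: str = "") -> str:
    """Mimic the Modelfile TEMPLATE: optional system text is prepended
    to the user turn, then the model turn is opened for generation."""
    system_part = f"{system} " if system else ""
    return (
        "<start_of_turn>user\n"
        f"{system_part}{prompt}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

# Example: the string the model actually sees for one user message
example = render_gemma_prompt("Why is the sky blue?")
```

Using the wrong turn delimiters (or omitting the `stop` parameters) is a common cause of run-on generations with Gemma-format models, which is why both `<start_of_turn>` and `<end_of_turn>` appear as stop strings above.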
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the della merge method, with [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it) as the base.

### Models Merged

The following models were included in the merge:
* [wzhouad/gemma-2-9b-it-WPO-HB](https://huggingface.co/wzhouad/gemma-2-9b-it-WPO-HB)
* [princeton-nlp/gemma-2-9b-it-SimPO](https://huggingface.co/princeton-nlp/gemma-2-9b-it-SimPO)
* [UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3](https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3)

### Configuration

The following YAML configuration was used to produce this model:
```yaml
models:
  - model: google/gemma-2-9b-it
  - model: wzhouad/gemma-2-9b-it-WPO-HB
    parameters:
      density: 0.55
      weight: 0.6
  - model: princeton-nlp/gemma-2-9b-it-SimPO
    parameters:
      density: 0.35
      weight: 0.6
  - model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3
    parameters:
      density: 0.25
      weight: 0.4
merge_method: della
base_model: google/gemma-2-9b-it
parameters:
  normalize: true
  int8_mask: true
  lambda: 1.0
  epsilon: 0.1
dtype: float16
```
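For intuition about what `density`, `weight`, and `normalize` do in a config like this, here is a simplified NumPy sketch of a DARE/della-style merge step. This is a toy illustration under stated assumptions, not mergekit's actual implementation (della in particular ranks drop probabilities by delta magnitude rather than dropping uniformly at random, and real merges operate per-tensor across a full model):

```python
import numpy as np

def della_style_merge(base, tuned_models, rng=None):
    """Toy sketch: for each fine-tune, keep roughly a `density` fraction
    of its delta from the base, rescale survivors to compensate, then add
    the weight-normalized deltas back onto the base parameters."""
    rng = rng if rng is not None else np.random.default_rng(0)
    total_weight = sum(weight for _, _, weight in tuned_models)
    merged = base.copy()
    for params, density, weight in tuned_models:
        delta = params - base                            # task vector
        mask = rng.random(delta.shape) < density         # keep ~density of entries
        pruned = np.where(mask, delta / density, 0.0)    # rescale survivors
        merged += (weight / total_weight) * pruned       # normalize: true
    return merged
```

The intuition: lower `density` prunes more of a model's delta (reducing interference between fine-tunes), while `weight` sets each model's relative contribution after normalization. In the config above, WPO-HB gets the densest, heaviest contribution and SPPO-Iter3 the sparsest, lightest one.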
measurement.json ADDED (diff too large to render; see raw diff)