Update README.md
This is a progressive (mostly dare-ties, but also slerp) merge with the intention …

It achieves a German EQ-Bench score (v2_de) of 62.59 (Parseable: 171.0).

It should work sufficiently well with the ChatML prompt template, as all merged models should have seen ChatML prompts at least in the DPO stage.
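
As an illustration, here is a minimal usage sketch with `transformers` (hypothetical example code, not taken from this card; it assumes the repository's tokenizer ships a ChatML chat template):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cstr/Spaetzle-v69-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Was sind Spätzle?"}]

# With a ChatML template this renders to:
# <|im_start|>user\nWas sind Spätzle?<|im_end|>\n<|im_start|>assistant\n
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```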

Spaetzle-v69-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [abideen/AlphaMonarch-dora](https://huggingface.co/abideen/AlphaMonarch-dora)
* [cstr/Spaetzle-v68-7b](https://huggingface.co/cstr/Spaetzle-v68-7b)
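
For orientation, here is a minimal sketch of how such a two-model merge is driven with mergekit (the slerp recipe below is a hypothetical illustration; the actual merge recipe for this model is the one given under Configuration):

```python
# Hypothetical mergekit recipe: a slerp merge of the two parent models above.
# NOT the actual Spaetzle-v69-7b recipe -- see the "Configuration" section.
recipe = """\
slices:
  - sources:
      - model: abideen/AlphaMonarch-dora
        layer_range: [0, 32]
      - model: cstr/Spaetzle-v68-7b
        layer_range: [0, 32]
merge_method: slerp
base_model: cstr/Spaetzle-v68-7b
parameters:
  t: 0.5  # interpolation weight: 0 = pure base model, 1 = pure other model
dtype: bfloat16
"""

with open("merge-config.yml", "w") as f:
    f.write(recipe)

# Run the merge with the mergekit CLI:
#   mergekit-yaml merge-config.yml ./merged-model --copy-tokenizer
```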

The merge tree in total involves the following original models:
- [abideen/AlphaMonarch-dora](https://huggingface.co/abideen/AlphaMonarch-dora)
- [mayflowergmbh/Wiedervereinigung-7b-dpo](https://huggingface.co/mayflowergmbh/Wiedervereinigung-7b-dpo)
- [flemmingmiguel/NeuDist-Ro-7B](https://huggingface.co/flemmingmiguel/NeuDist-Ro-7B)
- [ResplendentAI/Flora_DPO_7B](https://huggingface.co/ResplendentAI/Flora_DPO_7B)
- [yleo/EmertonMonarch-7B](https://huggingface.co/yleo/EmertonMonarch-7B)
- [occiglot/occiglot-7b-de-en-instruct](https://huggingface.co/occiglot/occiglot-7b-de-en-instruct)
- [OpenPipe/mistral-ft-optimized-1227](https://huggingface.co/OpenPipe/mistral-ft-optimized-1227)
- [DiscoResearch/DiscoLM_German_7b_v1](https://huggingface.co/DiscoResearch/DiscoLM_German_7b_v1)
- [LeoLM/leo-mistral-hessianai-7b](https://huggingface.co/LeoLM/leo-mistral-hessianai-7b)
- [DRXD1000/Phoenix](https://huggingface.co/DRXD1000/Phoenix)
- [VAGOsolutions/SauerkrautLM-7b-v1-mistral](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1-mistral)
- [malteos/hermeo-7b](https://huggingface.co/malteos/hermeo-7b)
- [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)
- [cognitivecomputations/openchat-3.5-0106-laser](https://huggingface.co/cognitivecomputations/openchat-3.5-0106-laser)

## 🧩 Configuration