---

# Magic-Dolphin-7b

<img src="https://huggingface.co/InferenceIllusionist/Magic-Dolphin-7b/resolve/main/magic-dolphin.jfif" width="500"/>

A linear merge of:

- [cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser)
- [Locutusque/Hyperion-1.5-Mistral-7B](https://huggingface.co/Locutusque/Hyperion-1.5-Mistral-7B)
- [ibm/merlinite-7b](https://huggingface.co/ibm/merlinite-7b)

These three models showed excellent acumen in technical topics, so I wanted to see how they would behave together in a merge. Several different ratios were tested before this release; in the end, a higher weighting for merlinite-7b helped smooth out some rough edges. This model is also a test of how LAB tuning is affected by merging with models that leverage DPO.
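
For context, a "linear merge" is just a weighted average of the source models' parameters, computed tensor by tensor. The sketch below shows that arithmetic directly with `transformers`; the ratios are hypothetical placeholders (the exact weights behind this release are not listed here), and it assumes all three checkpoints share the same architecture and tensor shapes. In practice, a dedicated tool such as mergekit is the usual way to run merges like this.

```python
# Minimal sketch of a linear (weighted-average) merge.
# Assumptions: all three sources share the same Mistral-7B architecture
# and tensor shapes, and the ratios below are hypothetical -- the exact
# weights used for Magic-Dolphin-7b are not published in this card.
import torch
from transformers import AutoModelForCausalLM

SOURCES = {
    "cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser": 0.30,
    "Locutusque/Hyperion-1.5-Mistral-7B": 0.30,
    "ibm/merlinite-7b": 0.40,  # weighted highest, per the note above
}

merged = None
for repo_id, weight in SOURCES.items():
    # Load each source on CPU in half precision and grab its parameters.
    state = AutoModelForCausalLM.from_pretrained(
        repo_id, torch_dtype=torch.float16
    ).state_dict()
    if merged is None:
        merged = {name: weight * tensor for name, tensor in state.items()}
    else:
        for name, tensor in state.items():
            merged[name] += weight * tensor  # accumulate the weighted average

# Write the averaged parameters back into one source architecture and save.
base = AutoModelForCausalLM.from_pretrained(
    "ibm/merlinite-7b", torch_dtype=torch.float16
)
base.load_state_dict(merged)
base.save_pretrained("Magic-Dolphin-7b-linear")
```

Because the weights sum to 1.0, each merged tensor stays on the same scale as its sources; the experiment then comes down to how those three ratios are chosen.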

This was my first experiment with merging models, so any feedback is greatly appreciated.