---
base_model:
- mistralai/Mixtral-8x7B-v0.1
- Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
- KoboldAI/Mixtral-8x7B-Holodeck-v1
- jondurbin/bagel-dpo-8x7b-v0.2
- mistralai/Mixtral-8x7B-Instruct-v0.1
tags:
- mergekit
- merge
license: apache-2.0
---
# DonutHole-8x7B

_These are GGUF quantized versions of [DonutHole-8x7B](https://huggingface.co/ycros/DonutHole-8x7B)._

Bagel, Mixtral Instruct, Holodeck, LimaRP.

> What mysteries lie in the hole of a donut?

Good with Alpaca prompt formats; it also works with Mistral's format. See usage details below.

![image/webp](https://cdn-uploads.huggingface.co/production/uploads/63044fa07373aacccd8a7c53/VILuxGHvEPmDsn0YUX6Gh.webp)

This is similar to [BagelMIsteryTour](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B), but I've swapped out Sensualize for the new Holodeck.
I'm not sure yet if it's better, or how it does at higher (8k+) contexts.

Similar sampler advice applies as for BMT: minP (0.07-0.3 to taste) -> temp (either dynatemp of roughly 0-4, or a temp of 3-4 with a smoothing factor of around 2.5).
And yes, that's temp last. It does okay without rep pen up to a point: it doesn't seem to get into a complete jam, but it can start to repeat sentences,
so you'll probably need some; 1.07 at a 1024 range seems okayish.
(Rep pen sucks, but there are better things coming.)
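
As a concrete starting point, here's a minimal llama-cpp-python sketch of those settings. The GGUF filename is hypothetical (use whichever quant you downloaded), and since this high-level API doesn't expose dynatemp or the smoothing factor, the sketch falls back to a milder static temperature; prefer the settings above if your backend supports them.

```python
from llama_cpp import Llama

# Hypothetical quant filename; substitute the GGUF file you actually have.
llm = Llama(
    model_path="DonutHole-8x7B.Q5_K_M.gguf",
    n_ctx=8192,
    last_n_tokens_size=1024,  # window the repetition penalty applies over
)

prompt = (
    "### Instruction:\n"
    "Continue the story.\n\n"
    "### Response:\n"
)

out = llm(
    prompt,
    max_tokens=256,
    min_p=0.1,            # 0.07-0.3 to taste
    temperature=1.0,      # stand-in; use dynatemp/smoothing where available
    repeat_penalty=1.07,
)
print(out["choices"][0]["text"])
```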

I've mainly tested with LimaRP-style Alpaca prompts (instruction/input/response), and briefly with Mistral's own format.
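
For reference, a LimaRP-style Alpaca prompt has roughly this shape (the field contents are placeholders):

```
### Instruction:
{character and scenario description}

### Input:
{user's message}

### Response:
{model's reply}
```

Mistral's own format instead wraps each user turn as `[INST] ... [/INST]`.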

**Full credit to all the model and dataset authors; I am but a derp with compute and a yaml file.**

---

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) as the base.
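
Roughly speaking, DARE randomly drops a fraction (1 - density) of each model's parameter deltas against the base and rescales the survivors by 1/density, and TIES then resolves sign disagreements between models before the weighted deltas are added back. A toy numpy sketch of the idea (illustrative only, not mergekit's actual implementation; model names and sizes are made up):

```python
import numpy as np

def dare_prune(delta, density, rng):
    # DARE: keep each delta entry with probability `density`, rescaling
    # survivors by 1/density so the expected delta is unchanged.
    mask = rng.random(delta.shape) < density
    return np.where(mask, delta / density, 0.0)

rng = np.random.default_rng(0)
base = rng.normal(size=8)          # a base-model weight tensor
deltas = {                         # fine-tune minus base, per model
    "instruct": rng.normal(size=8),
    "bagel": rng.normal(size=8),
}
weights = {"instruct": 1.0, "bagel": 0.5}
densities = {"instruct": 0.6, "bagel": 0.6}

pruned = {k: dare_prune(d, densities[k], rng) for k, d in deltas.items()}

# TIES-style sign consensus: keep only entries whose sign agrees with the
# weighted sum, then add the surviving deltas back onto the base weights.
total = sum(weights[k] * pruned[k] for k in pruned)
consensus = np.sign(total)
merged = base + sum(
    weights[k] * np.where(np.sign(pruned[k]) == consensus, pruned[k], 0.0)
    for k in pruned
)
```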

### Models Merged

The following models were included in the merge:

* [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) + [Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora](https://huggingface.co/Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora)
* [KoboldAI/Mixtral-8x7B-Holodeck-v1](https://huggingface.co/KoboldAI/Mixtral-8x7B-Holodeck-v1)
* [jondurbin/bagel-dpo-8x7b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-8x7b-v0.2)
* [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: mistralai/Mixtral-8x7B-v0.1
models:
  - model: mistralai/Mixtral-8x7B-v0.1+Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
    parameters:
      density: 0.5
      weight: 0.2
  - model: KoboldAI/Mixtral-8x7B-Holodeck-v1
    parameters:
      density: 0.5
      weight: 0.2
  - model: mistralai/Mixtral-8x7B-Instruct-v0.1
    parameters:
      density: 0.6
      weight: 1.0
  - model: jondurbin/bagel-dpo-8x7b-v0.2
    parameters:
      density: 0.6
      weight: 0.5
merge_method: dare_ties
dtype: bfloat16
```