---
license: apache-2.0
base_model:
- cognitivecomputations/dolphin-2.8-mistral-7b-v02
- NexusFlow/Starling-LM-7B-beta
library_name: transformers
tags:
- mergekit
- merge
---
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63cf23cffbd0cc580bc65c73/QDvxvuS3M7oHv7JI5d1ke.png)

Custom model "Dolphin2Star1", merged by Noodlz.
A 12.5B-parameter linear merge built with the uncensored Dolphin 2.8 (Mistral 7B v0.2) as the base, interleaved with layers from StarlingLM 7B Beta, which is a fine-tune of the original Mistral 7B v0.1.

have fun =)



[EDIT] - Preset-wise, the model seems to like the "ChatML" format.
[EDIT 2] - Usage notes: the model is somewhat picky about batch size and prompt preset/template (probably because it merges ChatML- and OpenChat-formatted models).

My current recommended settings & findings:
- Using LM Studio - use the default preset, GPU acceleration at max, prompt eval batch size 1024, context length 32768. This yields decent, coherent results. ChatML works too, but occasionally spits out odd text after a couple of turns.
- Using Oobabooga (Windows PC) - runs well using load-in-4bit together with use_flash_attention_2; the default presets all work fine (see the loading sketch below).
- Using Oobabooga (Mac) - [investigating]
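
For quick testing outside a GUI frontend, here is a minimal loading sketch with transformers, mirroring the 4-bit setting mentioned above. Note that `Noodlz/Dolphin2Star1` is a hypothetical repo id used for illustration; substitute the actual one:

```python
# Minimal sketch: load the merge in 4-bit with transformers + bitsandbytes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Noodlz/Dolphin2Star1"  # hypothetical repo id, replace as needed

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```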




## Instruction Template:
```
{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{{ '<s>' }}{% for message in messages %}{{'<|im_start|>' + message['role'] + '
' + message['content'] + '<|im_end|>' + '
'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant
' }}{% endif %}
```
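
This is the standard ChatML chat template. With transformers, you can let the tokenizer render it via `apply_chat_template` instead of formatting prompts by hand. A minimal sketch, assuming the hypothetical repo id above and that the tokenizer ships with this template:

```python
# Sketch: render a ChatML prompt with the tokenizer's chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Noodlz/Dolphin2Star1")  # hypothetical repo id

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a haiku about model merging."},
]

# add_generation_prompt=True appends the trailing '<|im_start|>assistant\n'
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```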


## Chat Template:
```
{%- for message in messages %}
    {%- if message['role'] == 'system' -%}
        {%- if message['content'] -%}
            {{- message['content'] + '\n\n' -}}
        {%- endif -%}
        {%- if user_bio -%}
            {{- user_bio + '\n\n' -}}
        {%- endif -%}
    {%- else -%}
        {%- if message['role'] == 'user' -%}
            {{- name1 + ': ' + message['content'] + '\n'-}}
        {%- else -%}
            {{- name2 + ': ' + message['content'] + '\n' -}}
        {%- endif -%}
    {%- endif -%}
{%- endfor -%}
```
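
The chat template above uses text-generation-webui-style variables (`name1`, `name2`, `user_bio`). Outside that frontend, it can be rendered with plain Jinja2; a minimal sketch (the names passed to `render` are illustrative, and `user_bio` is simply omitted here):

```python
# Sketch: render the webui-style chat template with plain Jinja2.
from jinja2 import Template

TEMPLATE = (
    "{%- for message in messages %}"
    "{%- if message['role'] == 'system' -%}"
    "{%- if message['content'] -%}{{- message['content'] + '\\n\\n' -}}{%- endif -%}"
    "{%- if user_bio -%}{{- user_bio + '\\n\\n' -}}{%- endif -%}"
    "{%- else -%}"
    "{%- if message['role'] == 'user' -%}"
    "{{- name1 + ': ' + message['content'] + '\\n' -}}"
    "{%- else -%}"
    "{{- name2 + ': ' + message['content'] + '\\n' -}}"
    "{%- endif -%}"
    "{%- endif -%}"
    "{%- endfor -%}"
)

print(Template(TEMPLATE).render(
    messages=[
        {"role": "system", "content": "A chat between You and Dolphin2Star1."},
        {"role": "user", "content": "Hello!"},
    ],
    name1="You",
    name2="Dolphin2Star1",
))
```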




---
# Dolphin2Star1

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
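
Conceptually, a linear merge is a weighted average of corresponding parameter tensors across models. The sketch below illustrates the idea only; it is not mergekit's actual implementation (which also handles slicing, tokenizer selection, and dtype casting), and the names `linear_merge`, `dolphin_sd`, and `starling_sd` are illustrative:

```python
# Sketch: linear merge = normalized weighted average of matching tensors.
import torch

def linear_merge(state_dicts, weights):
    """Average parameter tensors across models, weighted and normalized."""
    total = sum(weights)
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(
            w * sd[name].float() for w, sd in zip(weights, state_dicts)
        ) / total
    return merged

# e.g. merged = linear_merge([dolphin_sd, starling_sd], weights=[1.0, 0.0])
# With weight 0 for one model, a slice is effectively taken from the other,
# as in the first and last slices of the configuration below.
```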

### Models Merged

The following models were included in the merge:
* [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02)
* [NexusFlow/Starling-LM-7B-beta](https://huggingface.co/NexusFlow/Starling-LM-7B-beta)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: linear
parameters:
  weight: 1.0
slices:
  - sources:
      - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
        layer_range: [0,1]
      - model: NexusFlow/Starling-LM-7B-beta
        layer_range: [0,1]
        parameters: 
          weight: 0
  - sources:
      - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
        layer_range: [1,8]        
  - sources:
      - model: NexusFlow/Starling-LM-7B-beta
        layer_range: [4,12]
  - sources:
      - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
        layer_range: [8,16]        
  - sources:
      - model: NexusFlow/Starling-LM-7B-beta
        layer_range: [12,20]  
  - sources:
      - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
        layer_range: [16,24]        
  - sources:
      - model: NexusFlow/Starling-LM-7B-beta
        layer_range: [20,28]
  - sources:
      - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
        layer_range: [24,31]        
  - sources:
      - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
        layer_range: [31,32]
      - model: NexusFlow/Starling-LM-7B-beta
        layer_range: [31,32]
        parameters: 
          weight: 0          
dtype: float16
tokenizer_source: model:cognitivecomputations/dolphin-2.8-mistral-7b-v02
```
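
Note how the slices stack: the layer ranges concatenate into a 56-layer model (versus 32 layers in a stock Mistral 7B), which is roughly where the 12.5B parameter count comes from. A quick sanity check over the ranges above:

```python
# Sanity check: total layers in the merged stack from the slice ranges above.
slices = [(0, 1), (1, 8), (4, 12), (8, 16), (12, 20),
          (16, 24), (20, 28), (24, 31), (31, 32)]
print(sum(end - start for start, end in slices))  # 56
```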