---
base_model:
- allura-org/TQ2.5-14B-Neon-v1
- allura-org/TQ2.5-14B-Sugarquill-v1
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
language:
- en
---

<img src="aletheia.png">
<small>Image by CalamitousFelicitousness</small>

---


# Qwen2.5-14B Aletheia v1

RP/Story hybrid model, a merge of Sugarquill and Neon. As with the Gemma version, I wanted to preserve Sugarquill's creative spark while making the model more steerable for RP. It proved to be more difficult this time, but I quite like the result regardless, even if the model is still somewhat temperamental.

It should work for both RP and storywriting, either on raw completion or with back-and-forth co-writing in chat mode. It seems to be quite sensitive to low-depth instructions and samplers.

Thanks to Toasty and Fizz for testing and giving feedback.

Model was created by Auri.

---

**Notes about merging**

It took me twenty-something attempts to make this model. TIES didn't work at all, producing broken or nearly broken results every time. SLERP worked much better, and after just 3 attempts I got something I like.
Sugarquill was really prone to overtaking the merge, so I had to reduce its share a lot, and the model still carries a lot of its influence.

**Format**

The model responds to ChatML instruct formatting, exactly like its base model.

```
<|im_start|>system
{system message}<|im_end|>
<|im_start|>user
{user message}<|im_end|>
<|im_start|>assistant
{response}<|im_end|>
```
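
If you're running the model with Transformers directly, the tokenizer's bundled chat template should render this ChatML format for you. A minimal sketch (the prompt contents and sampling values here are just for illustration):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allura-org/TQ2.5-14B-Aletheia-v1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You are a co-writer continuing the story with the user."},
    {"role": "user", "content": "Write the opening scene of a quiet sci-fi mystery."},
]

# apply_chat_template builds the <|im_start|>/<|im_end|> prompt shown above
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```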

**Recommended Samplers**

This one is a bit of a special snowflake with particular tastes. These settings seem to work pretty well:

```
Temperature - 0.8
Top-A - 0.3
TFS - 0.75
DRY - Multiplier 0.8 - Base 1.75 - Allowed length 3 - Range 1024
```
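
If you're driving a local backend from a script instead of SillyTavern, the same settings translate roughly like this. The sketch below targets a KoboldCpp-style `/api/v1/generate` endpoint; the exact field names (especially for DRY) are assumptions and vary between backends and versions, so check your backend's API reference:

```python
import requests

# Assumed KoboldCpp-style endpoint; adjust host/port for your setup.
API_URL = "http://127.0.0.1:5001/api/v1/generate"

payload = {
    "prompt": "<|im_start|>system\n...",  # ChatML-formatted prompt as shown above
    "max_length": 512,
    "temperature": 0.8,
    "top_a": 0.3,
    "tfs": 0.75,
    # DRY field names are an assumption; some backends expose the range separately.
    "dry_multiplier": 0.8,
    "dry_base": 1.75,
    "dry_allowed_length": 3,
}

response = requests.post(API_URL, json=payload, timeout=600)
print(response.json()["results"][0]["text"])
```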

As a starting point, you can try this [ST Master Import](https://huggingface.co/allura-org/TQ2.5-14B-Aletheia-v1/blob/main/TQ-Aletheia.json).

## Merge Details
### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:
* [allura-org/TQ2.5-14B-Neon-v1](https://huggingface.co/allura-org/TQ2.5-14B-Neon-v1)
* [allura-org/TQ2.5-14B-Sugarquill-v1](https://huggingface.co/allura-org/TQ2.5-14B-Sugarquill-v1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: allura-org/TQ2.5-14B-Sugarquill-v1
dtype: bfloat16
merge_method: slerp
parameters:
  t:
  - value: 0.7
slices:
- sources:
  - layer_range: [0, 48]
    model: allura-org/TQ2.5-14B-Neon-v1
  - layer_range: [0, 48]
    model: allura-org/TQ2.5-14B-Sugarquill-v1
```
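
To reproduce the merge, this config can be passed to mergekit, either through the `mergekit-yaml` CLI or its Python entry point. A minimal sketch based on mergekit's documented Python interface (option names may differ between mergekit versions, so treat it as a starting point):

```python
import torch
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "aletheia-slerp.yml"   # the YAML config above, saved to disk
OUTPUT_PATH = "./TQ2.5-14B-Aletheia-v1"

with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # carry the base model's tokenizer into the output
    ),
)
```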