---
license: apache-2.0
base_model:
- allura-org/Teleut-7b
tags:
- roleplay
- conversational
---
# Teleut 7b RP
![image/png](https://cdn-uploads.huggingface.co/production/uploads/634262af8d8089ebaefd410e/2y6PHgWe4ewoMFlgn-p3d.png)

A roleplay-focused LoRA finetune of Teleut 7b. Methodology and hyperparams inspired by [SorcererLM](https://huggingface.co/rAIfle/SorcererLM-8x22b-bf16) and [Slush](https://huggingface.co/crestf411/Q2.5-32B-Slush).

## Dataset
The worst mix of data you've ever seen. Like, seriously, you do not want to see the things that went into this model. It's bad.

## Recommended Settings
Chat template: ChatML  
Recommended samplers (not the be-all-end-all, try some on your own!):
- Temp 1.03 / TopK 200 / MinP 0.05 / TopA 0.2
- Temp 1.03 / TFS 0.75 / TopA 0.3
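
Outside of a frontend, the first preset can be reproduced with `transformers`. This is a minimal sketch, not an official snippet: the repo id and the example prompt are assumptions, and TopA/TFS are not implemented in `transformers`, so only the Temp/TopK/MinP preset is shown.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allura-org/Teleut-7b-RP"  # assumed repo id for this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The tokenizer's chat template renders ChatML, as recommended above.
messages = [
    {"role": "system", "content": "You are Sera, a sarcastic sky-pirate."},  # hypothetical persona
    {"role": "user", "content": "Sera, where are we headed next?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.03,  # Temp 1.03
    top_k=200,         # TopK 200
    min_p=0.05,        # MinP 0.05 (needs a recent transformers release)
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```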

## Quants
- [Static GGUFs](https://huggingface.co/allura-org/Teleut-7b-RP-GGUF)
- [Imatrix GGUFs (thanks bart!)](https://huggingface.co/bartowski/Teleut-7b-RP-GGUF)
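
For local inference, the quants can be pulled straight from the Hub with `llama-cpp-python`. A minimal sketch, assuming a Q4_K_M quant exists in the repo (check the file listing for the exact name):

```python
from llama_cpp import Llama

# Downloads the matching GGUF from the Hub on first use.
llm = Llama.from_pretrained(
    repo_id="bartowski/Teleut-7b-RP-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant; pick any from the repo
    n_ctx=8192,               # assumed context size
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in character."}],
    temperature=1.03,
    top_k=200,
    min_p=0.05,
)
print(out["choices"][0]["message"]["content"])
```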

## Hyperparams
### General
- Epochs = 2
- LR = 6e-5
- LR Scheduler = Cosine
- Optimizer = Paged AdamW 8bit
- Effective batch size = 12
### LoRA
- Rank = 16
- Alpha = 32
- Dropout = 0.25 (Inspiration: [Slush](https://huggingface.co/crestf411/Q2.5-32B-Slush))
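
For reference, the above maps onto a PEFT + `transformers` config roughly as follows. This is a sketch, not the actual training script: the target modules, output path, and the batch-size/accumulation split are assumptions; only the commented values come from this card.

```python
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=16,                          # Rank = 16
    lora_alpha=32,                 # Alpha = 32
    lora_dropout=0.25,             # Dropout = 0.25
    target_modules="all-linear",   # assumption; not stated on the card
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="teleut-7b-rp-lora",  # assumed path
    num_train_epochs=2,              # Epochs = 2
    learning_rate=6e-5,              # LR = 6e-5
    lr_scheduler_type="cosine",      # Cosine schedule
    optim="paged_adamw_8bit",        # Paged AdamW 8bit
    per_device_train_batch_size=4,   # 4 * 3 accumulation = 12 effective
    gradient_accumulation_steps=3,   # split is an assumption
)
```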

## Credits
Humongous thanks to the people who created the data. I would credit you all, but that would be cheating ;)  
Big thanks to all Allura members, especially Toasty, for testing and emotional support ilya /platonic  
NO thanks to Infermatic. They suck at hosting models.