---
license: apache-2.0
base_model:
- allura-org/Teleut-7b
tags:
- roleplay
- conversational
---
# Teleut 7b RP
[cute boygirlthing pending]
A roleplay-focused LoRA finetune of Teleut 7b. Methodology and hyperparams inspired by [SorcererLM](https://huggingface.co/rAIfle/SorcererLM-8x22b-bf16).
## Dataset
The worst mix of data you've ever seen. Like, seriously, you do not want to see the things that went into this model. It's bad.
## Recommended Settings
Chat template: ChatML
Recommended samplers (not the be-all-end-all, try some on your own!):
- Temp 1.03 / TopK 200 / MinP 0.05 / TopA 0.2
- Temp 1.03 / TFS 0.75 / TopA 0.3
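If your frontend doesn't apply ChatML automatically, here is a minimal sketch of the prompt layout the template produces. The `<|im_start|>` / `<|im_end|>` markers and role names are standard ChatML; the system prompt text is just an illustrative placeholder.

```python
def to_chatml(messages, add_generation_prompt=True):
    """Render a list of {'role', 'content'} dicts as a ChatML prompt string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    if add_generation_prompt:
        # Open an assistant turn so the model continues from here
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a roleplay partner."},  # placeholder
    {"role": "user", "content": "The tavern door creaks open..."},
])
print(prompt)
```

Note that not every backend exposes all of the samplers above: Top-A and TFS are available in koboldcpp/TabbyAPI-style frontends but not in plain `transformers.generate`, so you may need to drop those knobs depending on where you run the model.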
## Hyperparams
General:
- Epochs = 2
- LR = 6e-5
- LR Scheduler = Cosine
- Optimizer = Paged AdamW 8bit
- Effective batch size = 12
LoRA:
- Rank = 16
- Alpha = 32
- Dropout = 0.25 (Inspiration: [Slush](https://huggingface.co/crestf411/Q2.5-32B-Slush))
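For reference, the settings above map onto an axolotl-style config fragment roughly like this. This is a hedged sketch: the card doesn't name the trainer, and only the *effective* batch size (12) is given, so the micro-batch/accumulation split shown is an assumption.

```yaml
num_epochs: 2
learning_rate: 6e-5
lr_scheduler: cosine
optimizer: paged_adamw_8bit
# effective batch = micro_batch_size * gradient_accumulation_steps (* num GPUs)
micro_batch_size: 3            # assumed split; only the effective size (12) is stated
gradient_accumulation_steps: 4

adapter: lora
lora_r: 16
lora_alpha: 32
lora_dropout: 0.25
```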
## Credits
Thanks to the people who created the data. I would credit you, but that would be cheating ;)
Thanks to all Allura members, especially Toasty, for testing and emotional support ilya /platonic
NO thanks to Infermatic. They suck at hosting models