---
license: apache-2.0
base_model:
- allura-org/Teleut-7b
tags:
- roleplay
- conversational
---
|
# Teleut 7b RP

[cute boygirlthing pending]

A roleplay-focused LoRA finetune of Teleut 7b. Methodology and hyperparams inspired by [SorcererLM](https://huggingface.co/rAIfle/SorcererLM-8x22b-bf16) and [Slush](https://huggingface.co/crestf411/Q2.5-32B-Slush).
|
|
|
## Dataset

The worst mix of data you've ever seen. Like, seriously, you do not want to see the things that went into this model. It's bad.
|
|
|
## Recommended Settings

Chat template: ChatML
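For reference, here is a minimal sketch of ChatML prompt formatting. The `<|im_start|>`/`<|im_end|>` token names follow the common ChatML convention; the tokenizer's own `chat_template` is authoritative for this model.

```python
# Minimal ChatML prompt builder (a sketch, assuming the standard ChatML
# token names; check the model tokenizer's chat_template to be sure).
def to_chatml(messages):
    """Render a list of {"role": ..., "content": ...} dicts as a ChatML prompt."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    # Leave an open assistant turn for the model to complete.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a roleplay partner."},
    {"role": "user", "content": "Hello!"},
])
```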
|
Recommended samplers (not the be-all-end-all, try some on your own!):

- Temp 1.03 / TopK 200 / MinP 0.05 / TopA 0.2
- Temp 1.03 / TFS 0.75 / TopA 0.3
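The two presets above, written as plain settings dicts. The key names are an assumption based on common backend conventions (koboldcpp, text-generation-webui, etc.); your frontend may spell them differently.

```python
# Recommended sampler presets as settings dicts. Key names are assumed
# from common backend conventions and may differ in your frontend.
preset_a = {"temperature": 1.03, "top_k": 200, "min_p": 0.05, "top_a": 0.2}
preset_b = {"temperature": 1.03, "tfs": 0.75, "top_a": 0.3}  # TFS = tail-free sampling
```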
|
|
|
## Hyperparams

### General

- Epochs = 2
- LR = 6e-5
- LR Scheduler = Cosine
- Optimizer = Paged AdamW 8bit
- Effective batch size = 12
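"Effective batch size" is the product of micro-batch size, gradient-accumulation steps, and GPU count. The actual split used for this run isn't stated; the numbers below are purely illustrative.

```python
# One hypothetical way to reach an effective batch size of 12
# (the real micro-batch / accumulation split is not published here).
micro_batch_size = 2       # assumption for illustration
gradient_accumulation = 6  # assumption for illustration
num_gpus = 1               # assumption for illustration
effective_batch = micro_batch_size * gradient_accumulation * num_gpus
```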
|
### LoRA

- Rank = 16
- Alpha = 32
- Dropout = 0.25 (Inspiration: [Slush](https://huggingface.co/crestf411/Q2.5-32B-Slush))
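The LoRA settings above, collected into a dict using peft-style field names (an assumption; the actual training config is not published here). With standard LoRA, the adapter update is scaled by alpha / rank.

```python
# LoRA hyperparams in peft-style field names (assumed, not the real config).
lora_cfg = {"r": 16, "lora_alpha": 32, "lora_dropout": 0.25}

# Standard LoRA scales the adapter update by alpha / rank,
# so alpha = 2 * rank gives a scaling factor of 2.0.
scaling = lora_cfg["lora_alpha"] / lora_cfg["r"]
```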
|
|
|
## Credits

Humongous thanks to the people who created the data. I would credit you all, but that would be cheating ;)

Big thanks to all Allura members, especially Toasty, for testing and emotional support. ilya /platonic

NO thanks to Infermatic. They suck at hosting models.