---
tags:
- not-for-all-audiences
license: apache-2.0
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64dd7cda3d6b954bf7cdd922/ZXmxNKGaHUrqjdS1I3GkL.png)

## Thespis v0.7
This model works best with internet-style RP using standard markup: asterisks surrounding actions and no quotes around dialogue (for example: *waves* Hey, how have you been?).
Current work focuses on creating a custom DPO dataset and expanding the training data to cover more niche interests.

External Datasets Used:

* Pure-Dove Dataset
* Claude Multiround 30k
* OpenOrcaSlim
* Augmental Dataset

DPO was done using a few generic datasets available on Hugging Face; a brief loading sketch follows the list below.

DPO Data:

* Intel/orca_dpo_pairs
* NobodyExistsOnTheInternet/ToxicDPOqa

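For anyone putting together a similar preference stage, here is a minimal loading sketch, assuming Intel/orca_dpo_pairs keeps its usual system/question/chosen/rejected columns. The Username/BotName framing mirrors the chat format documented below; this is an illustration, not the exact training script used for this model.

```python
# Minimal sketch: turn one of the DPO sets above into prompt/chosen/rejected
# pairs. Assumes Intel/orca_dpo_pairs exposes system, question, chosen, and
# rejected columns; Username/BotName follow the chat format shown below.
from datasets import load_dataset

def to_pairs(row):
    # Rebuild the prompt in the model's chat format; the chosen/rejected
    # completions stay as-is for a DPO trainer to contrast.
    prompt = f"{row['system']}\n\nUsername: {row['question']}\nBotName:"
    return {"prompt": prompt, "chosen": row["chosen"], "rejected": row["rejected"]}

dpo_data = load_dataset("Intel/orca_dpo_pairs", split="train").map(to_pairs)
print(dpo_data[0]["prompt"][:200])
```
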
Works with the standard chat format for Ooba or SillyTavern.

## Prompt Format: Chat ( The default Ooba and SillyTavern templates )
```
{System Prompt}

Username: {Input}
BotName: {Response}
Username: {Input}
BotName: {Response}

```
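
As a rough illustration of wiring this template up outside of Ooba or SillyTavern, the sketch below fills in the placeholders by hand and generates with transformers. The repo id is a placeholder, not the actual weights, and the system prompt and names are example values.

```python
# Rough illustration of the chat template above with transformers.
# "your-org/thespis-v0.7" is a placeholder repo id: point it at the
# actual Thespis checkpoint. System prompt and names are examples.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/thespis-v0.7"  # placeholder, substitute the real repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# {System Prompt}, then alternating Username/BotName turns, ending on
# an open "BotName:" line for the model to complete.
prompt = (
    "You are BotName, chatting with Username in an internet RP style.\n\n"
    "Username: *waves* Hey, what are you up to?\n"
    "BotName:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
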
## Ooba ( Set it to Chat, select a character and go. )
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64dd7cda3d6b954bf7cdd922/HTl7QlAZcqe2hV8rwh4DG.png)

## Silly Tavern Settings ( Default )
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64dd7cda3d6b954bf7cdd922/ajny8P0LdW0nFtghpPbfB.png)