---
license: creativeml-openrail-m
language:
- en
tags:
- not-for-all-audiences
- art
- nsfw
- conversational
- chat
- llama-2-13B
---

### Model Description
A LoRA trained on the book *The 120 Days of Sodom*. It is intended for heavy NSFW content; many users might find it disturbing.

- **Developed by:** Zattio770
- **Model type:** LoRA adapter for a large language model
- **License:** creativeml-openrail-m
- **Finetuned from model:** MythoMax-L2-13b (Llama-2-13B)

## Uses




### Direct Use


This model is finetuned for heavy NSFW roleplay, though it can still act as an assistant (it may not be very helpful in that role).
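
As a loading sketch (not an official snippet from this repo): the adapter can be applied on top of the MythoMax-L2-13b base with Transformers and PEFT. The adapter repo id below is a placeholder, and the Alpaca-style prompt is an assumption based on MythoMax's usual format.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Gryphe/MythoMax-L2-13b"
adapter_id = "Zattio770/..."  # placeholder: substitute this repository's id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA

# Alpaca-style prompt (assumption); adjust to whatever your frontend uses.
prompt = "### Instruction:\nWrite a short scene.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=200, do_sample=True)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```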



### Out-of-Scope Use

Be safe: I am not responsible for what this LoRA outputs. Use it however you want, at your own risk.


## Bias, Risks, and Limitations


The model is likely biased toward NSFW content due to the large proportion of NSFW data in the training set.







## Training Details

### Training Data
Raw text of *The 120 Days of Sodom* by the Marquis de Sade (1904).


### Training Procedure
Trained against a 4-bit quantized MythoMax-L2 base with the following settings (a PEFT equivalent is sketched below):

- Rank: 128
- Alpha: 256
- Batch size: 128
- Micro batch size: 1
- Cutoff length: 256
- Epochs: 3
- Learning rate: 3e-4
- Overlap length: 64
- Prefer newline cut length: 128
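
For reference, a minimal PEFT sketch of an equivalent configuration; the dropout value and target modules are assumptions (the card does not state them), everything else mirrors the settings above.

```python
import torch
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA shape from the card: rank 128, alpha 256.
lora_config = LoraConfig(
    r=128,
    lora_alpha=256,
    lora_dropout=0.05,                    # assumption: not stated in the card
    target_modules=["q_proj", "v_proj"],  # assumption: common Llama-2 default
    bias="none",
    task_type="CAUSAL_LM",
)

# Effective batch size 128 with micro batch 1 -> 128 accumulation steps.
training_args = TrainingArguments(
    output_dir="./lora-out",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=128,
    num_train_epochs=3,
    learning_rate=3e-4,
    bf16=True,
)
```

The overlap length and prefer-newline cut length are text-chunking options from the training frontend: they control how the raw text is split into 256-token samples, not anything in the optimizer.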





#### Training Hyperparameters

- **Training regime:** BF16, QLoRA, constant LR 5e-5
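
A sketch of how the 4-bit QLoRA base load might look; the NF4 quantization type and double quantization are typical QLoRA defaults, assumed here rather than taken from the card.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit base weights with bf16 compute, matching the "BF16, QLoRA" regime.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4",       # assumption: standard QLoRA default
    bnb_4bit_use_double_quant=True,  # assumption
)

base_model = AutoModelForCausalLM.from_pretrained(
    "Gryphe/MythoMax-L2-13b",
    quantization_config=bnb_config,
    device_map="auto",
)
```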







### Compute Infrastructure

- **Hardware Type:** NVIDIA RTX 3080 Ti
- **Hours used:** 10
- **Software:** oobabooga's text-generation-webui