---
license: creativeml-openrail-m
language:
- en
tags:
- not-for-all-audiences
- art
- nsfw
- conversational
- chat
- llama-2-13B
---

### Model Description

A LoRA trained on the book 120 Days of Sodom. It is intended for heavy NSFW content; many users may find it disturbing.

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** Zattio770
- **Model type:** LoRA adapter for a large language model
- **License:** creativeml-openrail-m
- **Finetuned from model:** MythoMax-L2-13b (Llama-2-13B)

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

This model is fine-tuned for heavy NSFW roleplay, but it can still act as an assistant (it may not be very helpful in that role).
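
Below is a minimal, illustrative sketch (not from the original card) of loading this LoRA on top of its base model with Hugging Face Transformers and PEFT and generating a reply. The base repo ID, the adapter path, and the Alpaca-style prompt template are assumptions; substitute whatever your setup actually uses.

```python
# Rough usage sketch: load the base model, apply this LoRA adapter, generate.
# Repo IDs/paths and the prompt format below are assumptions, not from the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Gryphe/MythoMax-L2-13b"   # assumed repo ID for the base model
adapter_path = "path/to/this-lora"   # replace with this adapter's repo or local path

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_path)

# Alpaca-style prompt, commonly used with MythoMax-based models (assumption).
prompt = (
    "### Instruction:\n"
    "Continue the roleplay as the character described below.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```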

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

Be safe. I am not responsible for what this LoRA outputs. Do anything you want; I don't care.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

The model is likely biased toward NSFW output because of the large proportion of NSFW data in the training set.

## Training Details

### Training Data

Raw text: The 120 Days of Sodom by the Marquis de Sade (1904).

<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

### Training Procedure

Trained as a 4-bit LoRA on MythoMax-L2-13b with the following settings:

- Rank: 128
- Alpha: 256
- Batch size: 128
- Micro batch size: 1
- Cutoff length: 256
- Epochs: 3
- Learning rate: 3e-4
- Overlap length: 64
- Prefer newline: 128
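
These parameter names match the LoRA training tab of the text-generation-webui ("OogaBooga") mentioned under Compute Infrastructure. As a rough illustration only, the adapter shape corresponds to the PEFT configuration sketched below; the target modules and dropout are assumptions not stated in the card, and batch size 128 with micro batch 1 implies 128 gradient-accumulation steps.

```python
# Approximate PEFT equivalent of the adapter settings listed above (a sketch,
# not the original training script). target_modules and dropout are assumed.
from peft import LoraConfig

lora_config = LoraConfig(
    r=128,                                # Rank: 128
    lora_alpha=256,                       # Alpha: 256
    lora_dropout=0.05,                    # not stated in the card; assumed
    target_modules=["q_proj", "v_proj"],  # not stated in the card; assumed
    bias="none",
    task_type="CAUSAL_LM",
)
```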

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Training Hyperparameters

- **Training regime:** BF16, QLoRA, constant LR 5e-5 <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
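
The sketch below shows what a 4-bit QLoRA setup with bf16 compute and a constant 5e-5 learning rate might look like in Hugging Face Transformers (note that the Training Procedure section above lists 3e-4; the value here follows this line). The NF4/double-quantization choices and the output directory are assumptions, not taken from the card.

```python
# Illustrative QLoRA setup for the stated regime (bf16 compute, constant LR 5e-5).
# NF4 and double quantization are common QLoRA defaults, assumed here.
import torch
from transformers import BitsAndBytesConfig, TrainingArguments

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

training_args = TrainingArguments(
    output_dir="lora-out",            # assumed
    per_device_train_batch_size=1,    # Micro batch size: 1
    gradient_accumulation_steps=128,  # effective batch size 128
    num_train_epochs=3,               # Epochs: 3
    learning_rate=5e-5,               # constant LR as stated above
    lr_scheduler_type="constant",
    bf16=True,
)
```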

### Compute Infrastructure

- Hardware: NVIDIA GeForce RTX 3080 Ti
- Hours used: 10
- Software: oobabooga's text-generation-webui