---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
- nsfw
---
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/M96XjaPN0wRS5mnn6REl9.png)
[HIGHLY EXPERIMENTAL]
Use at your own risk. I'm not responsible for any use of this model; don't try to do anything this model tells you to do.
Highly uncensored.
If you get censored output, it may be because of keywords like "assistant", "Factual answer", or other "sweet words", as I call them, that trigger censoring across all the layers of the model (since they're all trained on some of them in one way or another).
Based off Unholy-12L: this is a test project. uukuguy/speechless-llama2-luban-orca-platypus-13b and jondurbin/spicyboros-13b-2.2 were merged first; then I deleted the first 8 layers and added 8 layers of MLewd at the beginning, did the same for layers 16 to 20, trying to break as much of the censoring as possible, before merging the output with MLewd at 0.33 weight.
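For readers curious what that kind of layer surgery looks like in code, here is a minimal sketch using plain `transformers` and `torch`. It is not the exact recipe used for this model: the real base was the speechless/spicyboros merge, the exact layer boundaries are assumptions taken from the description above, and the model ids are simply the repos listed below.

```python
import torch
from transformers import AutoModelForCausalLM

# Load a stand-in base (here one of the merge parents) and the MLewd donor.
base = AutoModelForCausalLM.from_pretrained(
    "uukuguy/speechless-llama2-luban-orca-platypus-13b", torch_dtype=torch.float16
)
donor = AutoModelForCausalLM.from_pretrained(
    "Undi95/MLewd-L2-13B-v2-3", torch_dtype=torch.float16
)

# Swap in MLewd's transformer blocks for layers 0-7 and 16-19 of the base stack
# (assumed ranges; the card says "first 8 layers" and "layers 16 to 20").
for i in list(range(0, 8)) + list(range(16, 20)):
    base.model.layers[i] = donor.model.layers[i]

# Linear merge of the edited stack with MLewd at 0.33 weight.
base_state = base.state_dict()
donor_state = donor.state_dict()
merged = {k: 0.67 * base_state[k] + 0.33 * donor_state[k] for k in base_state}
base.load_state_dict(merged)

base.save_pretrained("unholy-v1.1-sketch")
```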
<!-- description start -->
## Description
This repo contains fp16 files of Unholy v1.1, an uncensored model.
<!-- description end -->
<!-- description start -->
## Models and loras used
- uukuguy/speechless-llama2-luban-orca-platypus-13b
- jondurbin/spicyboros-13b-2.2
- Undi95/MLewd-L2-13B-v2-3
- ausboss/llama2-13b-supercot-loras
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```
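A minimal sketch of using the Alpaca template above with `transformers`; the repo id is a placeholder for this model's actual id, and the generation settings are illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Undi95/Unholy-v1.1"  # placeholder: replace with this repo's actual id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Fill the Alpaca template with an instruction.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a haiku about the sea.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
# Print only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```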
## Default prompt template
```
<prompt>
Reply:
```