Text Generation
Transformers
Safetensors
English
mixtral
mixture of experts
Mixture of Experts
8x3B
Llama 3.2 MOE
128k context
creative
creative writing
fiction writing
plot generation
sub-plot generation
story generation
scene continue
storytelling
fiction story
science fiction
romance
all genres
story
writing
vivid prosing
vivid writing
fiction
roleplaying
bfloat16
swearing
Brainstorm 20x
rp
horror
mergekit
llama
llama-3
llama-3.2
heretic
uncensored
decensored
abliterated
finetune
conversational
text-generation-inference
Update README.md
Browse files
README.md
CHANGED
|
@@ -61,7 +61,7 @@ Brainstorm 20x was also added, which added 12B parameters to the model.
|
|
| 61 |
Each and every model in this moe was de-censored/abliterated by [Heretic](https://github.com/p-e-w/heretic) v1.0.1 ; resulting in an average
|
| 62 |
model refusal rate of 9.25/100 ; some models WERE AS high as 96/100 refusals - which hampered model operation.
|
| 63 |
|
| 64 |
-
This model, and all "experts" within it are completely "UNLOCKED", NOT "politically correct" and will not
|
| 65 |
|
| 66 |
The highest model is now 12/100 refusals and lowest 4/100 refusals.
|
| 67 |
|
|
|
|
| 61 |
Each and every model in this moe was de-censored/abliterated by [Heretic](https://github.com/p-e-w/heretic) v1.0.1 ; resulting in an average
|
| 62 |
model refusal rate of 9.25/100 ; some models WERE AS high as 96/100 refusals - which hampered model operation.
|
| 63 |
|
| 64 |
+
This model, and all "experts" within it are completely "UNLOCKED", NOT "politically correct" and will not "refuse" almost all requests.
|
| 65 |
|
| 66 |
The highest model is now 12/100 refusals and lowest 4/100 refusals.
|
| 67 |
|