|
--- |
|
base_model: |
|
- mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated |
|
- grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter |
|
- agentlans/Llama3-vodka |
|
- NousResearch/Meta-Llama-3.1-8B-Instruct |
|
library_name: transformers |
|
tags: |
|
- mergekit |
|
- merge |
|
|
|
--- |
|
|
|
# Llama3.1-vodka |
|
|
|
- Input: text only |
|
- Output: text only |
|
|
|
This model is like vodka. It aims to be pure, potent, and versatile. |
|
|
|
- Pure: shouldn't greatly affect Llama 3.1 Instruct's capabilities and writing style except for uncensoring |
|
- Potent: it's a merge of abliterated models - it should stay uncensored after merging and finetuning |
|
- Versatile: basically Llama 3.1 Instruct except uncensored - drink it straight, mix it, finetune it, and make cocktails |
|
|
|
Please enjoy responsibly. |
|
|
|
Note that this model may still censor at times. If that's undesirable, tell the AI to be more uncensored and uninhibited. |
|
|
|
## Safety and risks |
|
|
|
- Excessive consumption is bad for your health |
|
- The model can produce harmful, offensive, or inappropriate content if prompted to do so |
|
- The model has weakened safeguards and a lack of moral and ethical judgements |
|
- The user takes responsibility for all outputs produced by the model |
|
- It is recommended to use the model in controlled environments where its risks can be safely managed |
|
|
|
## Models used: |
|
|
|
- [mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated](https://huggingface.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated) |
|
- `Llama-3.1-8B-Instruct-abliterated_via_adapter2` (Llama 3.1 adapted version of [grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter](https://huggingface.co/grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter)) |
|
- `Llama3.1-vodka-ported2` (Llama 3.1 adapted verison of [agentlans/Llama3-vodka](https://huggingface.co/agentlans/Llama3-vodka)) |
|
|
|
The above models were merged onto [NousResearch/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3.1-8B-Instruct) using the "task arithmetic" merge method. The model merges and LoRA extractions were done using [mergekit](https://github.com/arcee-ai/mergekit). |
|
|