
NeuralDaredevil-8B-abliterated


This is a DPO fine-tune of mlabonne/Daredevil-8B-abliterated, trained for one epoch on mlabonne/orpo-dpo-mix-40k. The DPO fine-tuning successfully recovers the performance lost to the abliteration process, making it an excellent uncensored model.
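For reference, this is roughly what such a DPO stage looks like with trl's DPOTrainer. A minimal sketch, assuming a recent trl (β‰₯ 0.12, where the tokenizer is passed as processing_class); the hyperparameters are illustrative, since the card only states that one epoch was used:

```python
# Minimal DPO fine-tuning sketch with trl; hyperparameters are illustrative,
# not the exact recipe used for this model.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "mlabonne/Daredevil-8B-abliterated"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Preference pairs (chosen/rejected conversations) for the DPO stage.
dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")

config = DPOConfig(
    output_dir="NeuralDaredevil-8B-abliterated",
    num_train_epochs=1,  # the card states one epoch
    beta=0.1,            # assumed KL penalty; not stated on the card
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
)

trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,  # trl >= 0.12; older versions use tokenizer=
)
trainer.train()
```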

πŸ”Ž Applications

NeuralDaredevil-8B-abliterated performs better than Meta's Llama 3 8B Instruct model in my tests.

You can use it for any application that doesn't require alignment, such as role-playing. It was tested in LM Studio using the "Llama 3" and "Llama 3 v2" presets.
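Those presets correspond to the Llama 3 instruct prompt format. If you want to verify the exact string your client should send, the model's built-in chat template produces it; a minimal sketch:

```python
# Print the prompt string produced by the model's built-in chat template,
# which the LM Studio "Llama 3" presets are intended to reproduce.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mlabonne/NeuralDaredevil-8B-abliterated")
messages = [{"role": "user", "content": "Hello!"}]
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```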

⚑ Quantization

Thanks to QuantFactory, ZeroWw, Zoyd, solidrust, and tarruda for providing quantized versions of this model.
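For example, one of the community GGUF quants can be run locally with llama-cpp-python. A minimal sketch, assuming you have downloaded a quant file (the file name below is illustrative):

```python
# Run a community GGUF quant with llama-cpp-python.
# The file name is an assumption; use whichever quant file you download.
from llama_cpp import Llama

llm = Llama(model_path="NeuralDaredevil-8B-abliterated.Q4_K_M.gguf", n_ctx=8192)
output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is a large language model?"}],
    max_tokens=256,
    temperature=0.7,
)
print(output["choices"][0]["message"]["content"])
```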

πŸ† Evaluation

Open LLM Leaderboard

NeuralDaredevil-8B-abliterated is the best-performing uncensored 8B model on the Open LLM Leaderboard in terms of MMLU score.


Nous

Evaluation performed using LLM AutoEval. See the entire leaderboard here.

| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| mlabonne/NeuralDaredevil-8B-abliterated | 55.87 | 43.73 | 73.6 | 59.36 | 46.8 |
| mlabonne/Daredevil-8B | 55.87 | 44.13 | 73.52 | 59.05 | 46.77 |
| mlabonne/Daredevil-8B-abliterated | 55.06 | 43.29 | 73.33 | 57.47 | 46.17 |
| NousResearch/Hermes-2-Theta-Llama-3-8B | 54.28 | 43.9 | 72.62 | 56.36 | 44.23 |
| openchat/openchat-3.6-8b-20240522 | 53.49 | 44.03 | 73.67 | 49.78 | 46.48 |
| meta-llama/Meta-Llama-3-8B-Instruct | 51.34 | 41.22 | 69.86 | 51.65 | 42.64 |
| meta-llama/Meta-Llama-3-8B | 45.42 | 31.1 | 69.95 | 43.91 | 36.7 |

🌳 Model family tree

(Family tree diagram: mlabonne/Daredevil-8B β†’ mlabonne/Daredevil-8B-abliterated β†’ mlabonne/NeuralDaredevil-8B-abliterated.)

πŸ’» Usage

```
!pip install -qU transformers accelerate
```

```python
import torch
import transformers
from transformers import AutoTokenizer

model = "mlabonne/NeuralDaredevil-8B-abliterated"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the conversation with the model's built-in Llama 3 chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample up to 256 new tokens with temperature and nucleus sampling.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```