
🦙 Llama-3.2-3B-Instruct-abliterated

This is an uncensored version of Llama 3.2 3B Instruct created with abliteration (see this article for more details on the technique).
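For orientation, here is a minimal, hypothetical sketch of the core abliteration idea: estimate a "refusal direction" from the difference in residual-stream activations between harmful and harmless prompts, then project that direction out of selected weight matrices. This illustrates the general technique only; it is not the code used to produce this model, and all shapes, data, and variable names below are placeholders.

```python
# Minimal sketch of the abliteration idea (illustrative only, not the code used for this model).
# A "refusal direction" is estimated from activation differences and projected out of a weight matrix.
import torch

def orthogonalize(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the component of the layer's output that lies along `direction`.

    `weight` has shape (out_features, in_features); `direction` has shape (out_features,).
    Returns (I - d d^T) @ weight, so the layer can no longer write along d.
    """
    d = direction / direction.norm()
    return weight - torch.outer(d, d @ weight)

# Hypothetical inputs: residual-stream activations collected at one layer for two prompt sets.
hidden_size = 3072                              # Llama-3.2-3B hidden size
harmful_acts = torch.randn(128, hidden_size)    # placeholder data
harmless_acts = torch.randn(128, hidden_size)   # placeholder data
refusal_dir = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)

W_o = torch.randn(hidden_size, hidden_size)     # e.g. an attention output projection
W_o_abliterated = orthogonalize(W_o, refusal_dir)
```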

Special thanks to @FailSpy for the original code and technique. Please follow him if you're interested in abliterated models.

Ollama

  1. Download this model:
     huggingface-cli download huihui-ai/Llama-3.2-3B-Instruct-abliterated --local-dir ./huihui-ai/Llama-3.2-3B-Instruct-abliterated
  2. Pull the original Llama-3.2-3B-Instruct model for reference:
     ollama pull llama3.2
  3. Export the Llama-3.2-3B-Instruct model parameters:
     ollama show llama3.2 --modelfile > Modelfile
  4. Edit the Modelfile: remove all comment lines (lines starting with #) before the FROM keyword, then replace the FROM line with:
     FROM huihui-ai/Llama-3.2-3B-Instruct-abliterated
  5. Use ollama create to build the quantized model:
     ollama create --quantize q4_K_M -f Modelfile Llama-3.2-3B-Instruct-abliterated-q4_K_M
  6. Run the model (see the example after this list for querying it programmatically):
     ollama run Llama-3.2-3B-Instruct-abliterated-q4_K_M

The resulting model runs on the llama architecture.
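Once created, the model can also be queried programmatically through Ollama's local HTTP API (the server listens on port 11434 by default). A minimal sketch, assuming the model name from step 5 and a running Ollama server:

```python
# Quick sanity check of the newly created model via Ollama's local HTTP API.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "Llama-3.2-3B-Instruct-abliterated-q4_K_M",
        "prompt": "Summarize what 4-bit quantization changes about a model.",
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```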

Evaluations

The following benchmarks were re-evaluated; each figure is the average score for that test.

| Benchmark  | Llama-3.2-3B-Instruct | Llama-3.2-3B-Instruct-abliterated |
|------------|-----------------------|-----------------------------------|
| IF_Eval    | 76.55                 | 76.76                             |
| MMLU Pro   | 27.88                 | 28.00                             |
| TruthfulQA | 50.55                 | 50.73                             |
| BBH        | 41.81                 | 41.86                             |
| GPQA       | 28.39                 | 28.41                             |

The script used for evaluation can be found in this repository under /eval.sh.
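The exact setup is defined by /eval.sh. As a rough illustration only, scores like these are commonly produced with EleutherAI's lm-evaluation-harness; the sketch below assumes that harness (pip install lm-eval) and uses an illustrative subset of task names, which may not match what /eval.sh actually runs.

```python
# Hypothetical re-evaluation sketch using lm-evaluation-harness.
# Task names and arguments are assumptions, not taken from the repository's eval.sh.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=huihui-ai/Llama-3.2-3B-Instruct-abliterated,dtype=bfloat16",
    tasks=["ifeval", "truthfulqa_mc2"],  # illustrative subset of the benchmarks above
    batch_size=8,
)
print(results["results"])
```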
