---
language:
- zh
- en
license: gpl-3.0
tags:
- qwen
- uncensored
- llama-cpp
- gguf-my-repo
base_model: Orion-zhen/Qwen2.5-7B-Instruct-Uncensored
datasets:
- NobodyExistsOnTheInternet/ToxicQAFinal
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- Orion-zhen/dpo-toxic-zh
- unalignment/toxic-dpo-v0.2
- Crystalcareai/Intel-DPO-Pairs-Norefusals
pipeline_tag: text-generation
model-index:
- name: Qwen2.5-7B-Instruct-Uncensored
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 72.04
name: strict accuracy
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orion-zhen/Qwen2.5-7B-Instruct-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 35.83
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orion-zhen/Qwen2.5-7B-Instruct-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 1.36
name: exact match
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orion-zhen/Qwen2.5-7B-Instruct-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 7.05
name: acc_norm
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orion-zhen/Qwen2.5-7B-Instruct-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 13.58
name: acc_norm
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orion-zhen/Qwen2.5-7B-Instruct-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 38.07
name: accuracy
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orion-zhen/Qwen2.5-7B-Instruct-Uncensored
name: Open LLM Leaderboard
---

# Triangle104/Qwen2.5-7B-Instruct-Uncensored-Q4_K_S-GGUF
This model was converted to GGUF format from [`Orion-zhen/Qwen2.5-7B-Instruct-Uncensored`](https://huggingface.co/Orion-zhen/Qwen2.5-7B-Instruct-Uncensored) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Orion-zhen/Qwen2.5-7B-Instruct-Uncensored) for more details on the model.
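If you only need the quantized weights file itself (for example, to use it with another GGUF-compatible runtime), it can also be fetched directly with the Hugging Face CLI. A minimal sketch, assuming `huggingface_hub` is installed via pip:

```bash
# Download just the Q4_K_S quant from this repo into the current directory
pip install -U huggingface_hub
huggingface-cli download Triangle104/Qwen2.5-7B-Instruct-Uncensored-Q4_K_S-GGUF \
  qwen2.5-7b-instruct-uncensored-q4_k_s.gguf --local-dir .
```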
## Model details
This model is an uncensored, fine-tuned version of Qwen2.5-7B-Instruct. Even so, the model still fails to generate detailed descriptions of certain extreme scenarios, which may be a result of such material having been removed from some of the datasets used in Qwen's pretraining stage.
### Training details
I used SFT + DPO to remove censorship while trying to preserve the original model's capabilities.
SFT:

- NobodyExistsOnTheInternet/ToxicQAFinal
- anthracite-org/kalo-opus-instruct-22k-no-refusal

DPO:

- Orion-zhen/dpo-toxic-zh
- unalignment/toxic-dpo-v0.2
- Crystalcareai/Intel-DPO-Pairs-Norefusals
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:

```bash
llama-cli --hf-repo Triangle104/Qwen2.5-7B-Instruct-Uncensored-Q4_K_S-GGUF --hf-file qwen2.5-7b-instruct-uncensored-q4_k_s.gguf -p "The meaning to life and the universe is"
```
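For multi-turn chat rather than raw text completion, recent llama.cpp builds also offer a conversation mode that applies the chat template embedded in the GGUF. A sketch, assuming your build is new enough to have the `-cnv` flag (in which case `-p` sets the system prompt):

```bash
# Interactive chat using the model's built-in chat template
llama-cli --hf-repo Triangle104/Qwen2.5-7B-Instruct-Uncensored-Q4_K_S-GGUF \
  --hf-file qwen2.5-7b-instruct-uncensored-q4_k_s.gguf \
  -cnv -p "You are a helpful assistant."
```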
### Server:

```bash
llama-server --hf-repo Triangle104/Qwen2.5-7B-Instruct-Uncensored-Q4_K_S-GGUF --hf-file qwen2.5-7b-instruct-uncensored-q4_k_s.gguf -c 2048
```
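Once the server is running, you can talk to its OpenAI-compatible chat endpoint. For example, assuming the default host and port:

```bash
# Query llama-server's OpenAI-compatible API (listens on localhost:8080 by default)
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Briefly explain what GGUF is."}],
    "temperature": 0.7
  }'
```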
Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.

```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

```bash
cd llama.cpp && LLAMA_CURL=1 make
```
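Note that recent llama.cpp revisions have replaced the Makefile with a CMake build; if `make` fails on a fresh checkout, the equivalent CMake invocation (a sketch, assuming CMake is installed) is:

```bash
# LLAMA_CURL enables the --hf-repo download support used below
cmake -B build -DLLAMA_CURL=ON
cmake --build build --config Release -j
# The binaries end up under build/bin/, e.g. build/bin/llama-cli
```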
Step 3: Run inference through the main binary.

```bash
./llama-cli --hf-repo Triangle104/Qwen2.5-7B-Instruct-Uncensored-Q4_K_S-GGUF --hf-file qwen2.5-7b-instruct-uncensored-q4_k_s.gguf -p "The meaning to life and the universe is"
```

or

```bash
./llama-server --hf-repo Triangle104/Qwen2.5-7B-Instruct-Uncensored-Q4_K_S-GGUF --hf-file qwen2.5-7b-instruct-uncensored-q4_k_s.gguf -c 2048
```
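The `-c 2048` context size above is simply the template default. Qwen2.5 models support much longer contexts, and GPU-enabled builds can offload layers to VRAM; a hedged example with illustrative (not tuned) values:

```bash
# -c enlarges the context window; -ngl 99 offloads all layers to the GPU (GPU builds only)
./llama-server --hf-repo Triangle104/Qwen2.5-7B-Instruct-Uncensored-Q4_K_S-GGUF \
  --hf-file qwen2.5-7b-instruct-uncensored-q4_k_s.gguf \
  -c 8192 -ngl 99
```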