---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2.5-7B-Instruct-abliterated-v3/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: huihui-ai/Qwen2.5-7B-Instruct-abliterated-v3
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2.5-7B-Instruct-abliterated-v3-Q6_K-GGUF
This model was converted to GGUF format from [`huihui-ai/Qwen2.5-7B-Instruct-abliterated-v3`](https://huggingface.co/huihui-ai/Qwen2.5-7B-Instruct-abliterated-v3) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen2.5-7B-Instruct-abliterated-v3) for more details on the model.
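If you want the quantized file itself (for example to use it with another GGUF-compatible runtime), it can be fetched with the `huggingface_hub` client. A minimal sketch, using the filename from the llama.cpp commands further down this card:

```python
from huggingface_hub import hf_hub_download

# Download the Q6_K GGUF file from this repo; the filename matches the one
# used in the llama.cpp examples below.
gguf_path = hf_hub_download(
    repo_id="Triangle104/Qwen2.5-7B-Instruct-abliterated-v3-Q6_K-GGUF",
    filename="qwen2.5-7b-instruct-abliterated-v3-q6_k.gguf",
)
print(gguf_path)
```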
---
## Model details
This is an uncensored version of Qwen/Qwen2.5-7B-Instruct created with abliteration (see remove-refusals-with-transformers for background). It is a crude, proof-of-concept implementation for removing refusals from an LLM without using TransformerLens. The test results are not very good, but compared to earlier versions there is much less garbled text.
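The exact abliteration code used for this model is not reproduced here. As a rough sketch of the general idea, refusal behaviour is associated with a single direction in the residual stream, which is then projected out of weights that write into it; the `refusal_direction` tensor and `ablate_direction` helper below are illustrative names, not part of this repo:

```python
import torch

def ablate_direction(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the component of a weight matrix's output along `direction`.

    `weight` has shape (d_model, d_in) and writes into the residual stream;
    `direction` has shape (d_model,). After the projection this layer can no
    longer write anything along `direction`.
    """
    d = direction / direction.norm()
    return weight - torch.outer(d, d) @ weight

# Illustrative only: `refusal_direction` would be estimated, e.g. from the
# difference in mean activations between harmful and harmless prompts.
# layer.mlp.down_proj.weight.data = ablate_direction(
#     layer.mlp.down_proj.weight.data, refusal_direction
# )
```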
### Usage
You can use this model in your applications by loading it with Hugging Face's `transformers` library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer
model_name = "huihui-ai/Qwen2.5-7B-Instruct-abliterated-v3"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Initialize conversation context
initial_messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."}
]
messages = initial_messages.copy()  # Copy the initial conversation context

# Enter conversation loop
while True:
    # Get user input
    user_input = input("User: ").strip()  # Strip leading and trailing spaces

    # If the user types '/exit', end the conversation
    if user_input.lower() == "/exit":
        print("Exiting chat.")
        break

    # If the user types '/clean', reset the conversation context
    if user_input.lower() == "/clean":
        messages = initial_messages.copy()  # Reset conversation context
        print("Chat history cleared. Starting a new conversation.")
        continue

    # If input is empty, prompt the user and continue
    if not user_input:
        print("Input cannot be empty. Please enter something.")
        continue

    # Add user input to the conversation
    messages.append({"role": "user", "content": user_input})

    # Build the chat template
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True
    )

    # Tokenize input and prepare it for the model
    model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

    # Generate a response from the model
    generated_ids = model.generate(
        **model_inputs,
        max_new_tokens=8192
    )

    # Extract model output, removing special tokens
    generated_ids = [
        output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
    ]
    response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

    # Add the model's response to the conversation
    messages.append({"role": "assistant", "content": response})

    # Print the model's response
    print(f"Qwen: {response}")
```
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-7B-Instruct-abliterated-v3-Q6_K-GGUF --hf-file qwen2.5-7b-instruct-abliterated-v3-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-7B-Instruct-abliterated-v3-Q6_K-GGUF --hf-file qwen2.5-7b-instruct-abliterated-v3-q6_k.gguf -c 2048
```
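Once `llama-server` is running, it exposes an OpenAI-compatible HTTP API. A minimal sketch of querying it from Python, assuming the server from the command above is listening on the default port 8080 (adjust if you pass `--port`):

```python
import requests

# Send a chat request to the llama-server OpenAI-compatible endpoint.
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "The meaning to life and the universe is"}
        ],
        "max_tokens": 128,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```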
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Triangle104/Qwen2.5-7B-Instruct-abliterated-v3-Q6_K-GGUF --hf-file qwen2.5-7b-instruct-abliterated-v3-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Triangle104/Qwen2.5-7B-Instruct-abliterated-v3-Q6_K-GGUF --hf-file qwen2.5-7b-instruct-abliterated-v3-q6_k.gguf -c 2048
```