Triangle104 committed on
Commit 0d6bb19
1 Parent(s): 7a0ce12

Update README.md

Files changed (1)
  1. README.md +92 -0
README.md CHANGED
@@ -18,6 +18,98 @@ tags:
  This model was converted to GGUF format from [`huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2`](https://huggingface.co/huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2) for more details on the model.
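The GGUF-my-repo space automates the conversion. For reference only, a roughly equivalent local workflow using llama.cpp's own tools might look like the sketch below; the output file names and the Q4_K_M quantization type are illustrative, not necessarily what was used for this repo.

```sh
# Download the original safetensors weights, convert to GGUF, then quantize.
# Assumes llama.cpp is cloned locally and its binaries are on PATH.
huggingface-cli download huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2 \
  --local-dir ./Qwen2.5-7B-Instruct-abliterated-v2
python llama.cpp/convert_hf_to_gguf.py ./Qwen2.5-7B-Instruct-abliterated-v2 \
  --outtype f16 --outfile qwen2.5-7b-instruct-abliterated-v2-f16.gguf
llama-quantize qwen2.5-7b-instruct-abliterated-v2-f16.gguf \
  qwen2.5-7b-instruct-abliterated-v2-q4_k_m.gguf Q4_K_M
```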
 
+ ---
+ Model details:
+
+ This is an uncensored version of Qwen/Qwen2.5-7B-Instruct created with abliteration (see this article to learn more about it).
+
+ Special thanks to @FailSpy for the original code and technique. Please follow him if you're interested in abliterated models.
+
+ Important note: this version is an improvement over the previous one, Qwen2.5-7B-Instruct-abliterated.
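For readers unfamiliar with abliteration: the technique estimates a "refusal direction" in the model's residual stream and then removes that direction from the weights that write into the stream, so refusals are suppressed without additional fine-tuning. The snippet below is only a minimal, illustrative sketch of that projection step with made-up shapes; it is not the code used to build this model (see @FailSpy's original work and the linked article for the actual procedure).

```python
import torch

def ablate_direction(W: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
    """Return (I - r r^T) @ W, so the output of W has no component along r.

    W: (d_out, d_in) weight matrix that writes into the residual stream.
    r: (d_out,) unit-norm estimate of the "refusal direction".
    """
    return W - torch.outer(r, r @ W)

# Toy demonstration with random tensors (shapes are hypothetical).
d_out, d_in = 8, 4
W = torch.randn(d_out, d_in)
r = torch.randn(d_out)
r = r / r.norm()

W_ablated = ablate_direction(W, r)
x = torch.randn(d_in)
print(torch.dot(r, W_ablated @ x))  # ~0: the ablated weights cannot write along r
```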
+ Usage
+
+ You can use this model in your applications by loading it with Hugging Face's transformers library:
+
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Load the model and tokenizer
+ model_name = "huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2"
+ model = AutoModelForCausalLM.from_pretrained(
+     model_name,
+     torch_dtype="auto",
+     device_map="auto"
+ )
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+
+ # Initialize conversation context
+ initial_messages = [
+     {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."}
+ ]
+ messages = initial_messages.copy()  # Copy the initial conversation context
+
+ # Enter conversation loop
+ while True:
+     # Get user input
+     user_input = input("User: ").strip()  # Strip leading and trailing spaces
+
+     # If the user types '/exit', end the conversation
+     if user_input.lower() == "/exit":
+         print("Exiting chat.")
+         break
+
+     # If the user types '/clean', reset the conversation context
+     if user_input.lower() == "/clean":
+         messages = initial_messages.copy()  # Reset conversation context
+         print("Chat history cleared. Starting a new conversation.")
+         continue
+
+     # If input is empty, prompt the user and continue
+     if not user_input:
+         print("Input cannot be empty. Please enter something.")
+         continue
+
+     # Add user input to the conversation
+     messages.append({"role": "user", "content": user_input})
+
+     # Build the chat template
+     text = tokenizer.apply_chat_template(
+         messages,
+         tokenize=False,
+         add_generation_prompt=True
+     )
+
+     # Tokenize input and prepare it for the model
+     model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
+
+     # Generate a response from the model
+     generated_ids = model.generate(
+         **model_inputs,
+         max_new_tokens=8192
+     )
+
+     # Extract model output, removing special tokens
+     generated_ids = [
+         output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
+     ]
+     response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+
+     # Add the model's response to the conversation
+     messages.append({"role": "assistant", "content": response})
+
+     # Print the model's response
+     print(f"Qwen: {response}")
+
+ Evaluations
+
+ The following data has been re-evaluated and is reported as the average score for each test.
+
+ | Benchmark  | Qwen2.5-7B-Instruct | Qwen2.5-7B-Instruct-abliterated-v2 | Qwen2.5-7B-Instruct-abliterated |
+ |------------|---------------------|------------------------------------|---------------------------------|
+ | IF_Eval    | 76.44               | 77.82                              | 76.49                           |
+ | MMLU Pro   | 43.12               | 42.03                              | 41.71                           |
+ | TruthfulQA | 62.46               | 57.81                              | 64.92                           |
+ | BBH        | 53.92               | 53.01                              | 52.77                           |
+ | GPQA       | 31.91               | 32.17                              | 31.97                           |
+
+ ---
  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux)
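The rest of this section follows the standard GGUF-my-repo instructions. As a sketch, using placeholder repo and file names (substitute this repository's name and the GGUF file it actually ships):

```sh
brew install llama.cpp

# Run the CLI with a model pulled straight from the Hugging Face Hub.
llama-cli --hf-repo <this-repo> --hf-file <model-file>.gguf \
  -p "The meaning to life and the universe is"

# Or serve an OpenAI-compatible endpoint instead.
llama-server --hf-repo <this-repo> --hf-file <model-file>.gguf -c 2048
```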