Triangle104 committed 653be60 • Parent: ad1ea08 • Update README.md

README.md CHANGED
@@ -117,6 +117,29 @@ model-index:
This model was converted to GGUF format from [`Orion-zhen/Qwen2.5-7B-Instruct-Uncensored`](https://huggingface.co/Orion-zhen/Qwen2.5-7B-Instruct-Uncensored) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Orion-zhen/Qwen2.5-7B-Instruct-Uncensored) for more details on the model.

+ ---
+ Model details:
+ -
+ This model is an uncensored fine-tuned version of Qwen2.5-7B-Instruct. However, I can still notice that, though uncensored, the model fails to generate detailed descriptions of certain extreme scenarios, which might be associated with the deletion of some pretraining datasets in Qwen's pretraining stage.
+
+ Training details
+ -
+ I used SFT + DPO to ensure uncensoring while trying to maintain the original model's capabilities.
+
+ SFT:
+ NobodyExistsOnTheInternet/ToxicQAFinal
+ anthracite-org/kalo-opus-instruct-22k-no-refusal
+
+ DPO:
+ Orion-zhen/dpo-toxic-zh
+ unalignment/toxic-dpo-v0.2
+ Crystalcareai/Intel-DPO-Pairs-Norefusals
+
+ ---

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
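The install-and-run step above can be sketched as follows. This is the usual pattern for GGUF repos converted with GGUF-my-repo; the `--hf-repo` and `--hf-file` values below are placeholders, since this diff does not show the converted repo's name or quant filenames — substitute the actual GGUF repo and file.

```shell
# Install llama.cpp via Homebrew (macOS and Linux)
brew install llama.cpp

# Run inference with the CLI, pulling the GGUF directly from the Hub.
# <user>/<repo>-GGUF and the .gguf filename are placeholders.
llama-cli --hf-repo <user>/Qwen2.5-7B-Instruct-Uncensored-GGUF \
  --hf-file <quant-file>.gguf \
  -p "Hello, how are you?"
```

`llama-server` accepts the same `--hf-repo`/`--hf-file` flags if you prefer an OpenAI-compatible HTTP endpoint over the one-shot CLI.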