---
license: gpl-3.0
datasets:
- NobodyExistsOnTheInternet/ToxicQAFinal
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- Orion-zhen/dpo-toxic-zh
- unalignment/toxic-dpo-v0.2
- Crystalcareai/Intel-DPO-Pairs-Norefusals
language:
- zh
- en
base_model:
- Qwen/Qwen2.5-7B-Instruct
pipeline_tag: text-generation
tags:
- qwen
- uncensored
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Qwen2.5-7B-Instruct-Uncensored-GGUF

This is a quantized version of [Orion-zhen/Qwen2.5-7B-Instruct-Uncensored](https://huggingface.co/Orion-zhen/Qwen2.5-7B-Instruct-Uncensored), created using llama.cpp.

# Original Model Card

# Qwen2.5-7B-Instruct-Uncensored

This model is an uncensored fine-tuned version of Qwen2.5-7B-Instruct. Even so, I notice that the model still fails to generate detailed descriptions of certain extreme scenarios, which may stem from deletions in some of the datasets used during Qwen's pretraining stage.

Check out my roleplay & writing enhanced model based on this one: [Orion-zhen/Meissa-Qwen2.5-7B-Instruct](https://huggingface.co/Orion-zhen/Meissa-Qwen2.5-7B-Instruct)

## Training details

I used SFT + DPO to remove censorship while trying to preserve the original model's capabilities.

- SFT:
  - NobodyExistsOnTheInternet/ToxicQAFinal
  - anthracite-org/kalo-opus-instruct-22k-no-refusal
- DPO:
  - Orion-zhen/dpo-toxic-zh
  - unalignment/toxic-dpo-v0.2
  - Crystalcareai/Intel-DPO-Pairs-Norefusals