huihui-ai/QwQ-32B-Coder-Fusion-8020

Overview

QwQ-32B-Coder-Fusion-8020 is a merged model that combines the strengths of two powerful Qwen-based models: huihui-ai/QwQ-32B-Preview-abliterated and huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated.
The weights are blended in an 8:2 ratio, with 80% coming from QwQ-32B-Preview-abliterated and 20% from Qwen2.5-Coder-32B-Instruct-abliterated. Although this is a simple mix, the model is usable and has not produced gibberish. This is an experiment: the 9:1, 8:2, and 7:3 ratios were tested separately to see how much the mixing ratio affects the model.

Please refer to the merge source code for details; a simplified sketch of the blending step is shown below.
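The blend described above amounts to a per-tensor weighted average of the two checkpoints. The author's actual merge script may differ; the following is only a minimal sketch in Python with transformers, assuming both source models share the same architecture and parameter names (the output directory name is hypothetical):

```python
# Minimal sketch of the 8:2 weight blend; requires enough memory to hold
# both 32B checkpoints at once. Illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "huihui-ai/QwQ-32B-Preview-abliterated"               # 80% of the weights
CODER = "huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated"   # 20% of the weights
ALPHA = 0.8

base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)
coder = AutoModelForCausalLM.from_pretrained(CODER, torch_dtype=torch.bfloat16)

coder_state = coder.state_dict()
merged_state = {}
for name, tensor in base.state_dict().items():
    # Linear interpolation of every parameter tensor: 0.8 * base + 0.2 * coder
    merged_state[name] = ALPHA * tensor + (1.0 - ALPHA) * coder_state[name]

base.load_state_dict(merged_state)
base.save_pretrained("QwQ-32B-Coder-Fusion-8020")
AutoTokenizer.from_pretrained(BASE).save_pretrained("QwQ-32B-Coder-Fusion-8020")
```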

Model Details

ollama

You can use huihui_ai/qwq-fusion:32b-8020 directly with ollama:

ollama run huihui_ai/qwq-fusion:32b-8020

Other mix ratios are available at huihui_ai/qwq-fusion.
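For use outside of ollama, a minimal sketch of loading the safetensors checkpoint with transformers is shown below. It assumes the standard Qwen2.5 chat template applies to this merge; the prompt and generation settings are purely illustrative:

```python
# Illustrative transformers usage; assumes the Qwen2.5 chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huihui-ai/QwQ-32B-Coder-Fusion-8020"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "user", "content": "Write a Python function that checks whether a number is prime."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```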

Model size: 32.8B parameters · Tensor type: BF16 · Format: Safetensors

Model tree for huihui-ai/QwQ-32B-Coder-Fusion-8020

Base model: Qwen/Qwen2.5-32B
Finetunes of this model: 1 model
Quantizations of this model: 4 models