unsloth-Qwen3-Coder-30B-A3B-Instruct-qx6-mlx
Based on the benchmark results, qx6 would be best suited for:
Primary Task: OpenBookQA
Why OpenBookQA is the Strength:
qx6 achieves 0.432 on OpenBookQA, the highest score among all models compared here. That is a 0.012 improvement over the bf16 baseline (0.420) and 0.002 better than qm68 (0.430), a meaningful advantage for knowledge-based reasoning tasks.
Secondary Strengths:
BoolQ
qx6 scores 0.881, the highest among all quantized models, indicating strong performance on boolean reasoning questions.
Arc_Challenge
qx6 scores 0.422, equal to the bf16 baseline (0.422): the quantized model matches the full-precision model exactly on these challenging questions.
Task Suitability Analysis:
Best Suited Tasks:
OpenBookQA - strongest performer (0.432)
BoolQ - highest among quantized models (0.881)
Arc_Challenge - matches the bf16 baseline (0.422)
PIQA - 0.724 (very good performance)
Other Tasks Where qx6 Performs Well:
HellaSwag - 0.546 (solid performance)
Arc_Easy - 0.532 (decent performance)
Winogrande - 0.576 (strongest among quantized models for this task)
General reasoning - very balanced performance across most tasks
Limitations:
Arc_Easy is its weakest point relative to some other variants (0.532 vs 0.537 for bf16)
Slightly below baseline on a few metrics, a consequence of its 6-bit quantization strategy
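These figures can be spot-checked locally with the same mlx-lm API shown later in this card. The sketch below is illustrative only: the repo id is taken from the model tree at the bottom of this card, and the yes/no question is an invented BoolQ-style item, not one drawn from the benchmark.

```python
from mlx_lm import load, generate

# Load the 6-bit quantized model; swap in the bf16 repo to compare baselines.
model, tokenizer = load("nightmedia/unsloth-Qwen3-Coder-30B-A3B-Instruct-qx6-mlx")

# Invented BoolQ-style yes/no question (not an actual benchmark item).
question = (
    "Is the boiling point of water at sea level 100 degrees Celsius? "
    "Answer yes or no."
)

messages = [{"role": "user", "content": question}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# A few tokens are enough for a yes/no answer.
print(generate(model, tokenizer, prompt=prompt, max_tokens=8))
```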
Recommendation:
Use qx6 when knowledge-based reasoning and boolean logic are critical, particularly for applications involving:
Educational assessment systems
Knowledge-intensive question answering
Tasks requiring both factual knowledge and logical reasoning
Scenarios where OpenBookQA performance is the primary concern
The model excels at combining factual recall (OpenBookQA) with logical reasoning (BoolQ), making it ideal for applications like educational AI, research assistants, and knowledge-based question-answering systems. Its ability to match the baseline performance on Arc_Challenge while excelling in OpenBookQA makes it particularly valuable for tasks requiring both broad knowledge and logical processing capabilities.
Best for: Educational AI, research assistants, fact-based question answering systems where OpenBookQA performance is paramount.
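As a concrete illustration of this recommended use, here is a minimal sketch of an OpenBookQA-style multiple-choice prompt. The fact, question, and options are invented for illustration; a rigorous evaluation would score log-likelihoods over the answer choices rather than parse generated text.

```python
from mlx_lm import load, generate

model, tokenizer = load("nightmedia/unsloth-Qwen3-Coder-30B-A3B-Instruct-qx6-mlx")

# Invented OpenBookQA-style item: an open-book fact plus a 4-way choice.
question = (
    "Fact: metals conduct electricity.\n"
    "Question: Which object would best complete a simple electrical circuit?\n"
    "A) rubber band\nB) copper wire\nC) wooden stick\nD) glass rod\n"
    "Answer with a single letter."
)

messages = [{"role": "user", "content": question}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

print(generate(model, tokenizer, prompt=prompt, max_tokens=4))
```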
This model unsloth-Qwen3-Coder-30B-A3B-Instruct-qx6-mlx was converted to MLX format from unsloth/Qwen3-Coder-30B-A3B-Instruct using mlx-lm version 0.26.3.
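For reference, a comparable conversion can be sketched with mlx-lm's Python API. This is an assumption-laden sketch: the exact qx6 mixed-precision recipe used for this repo is not documented here, so the plain 6-bit settings below will not reproduce it exactly.

```python
from mlx_lm import convert

# Plain 6-bit conversion sketch; the actual qx6 recipe mixes precisions
# per layer and is not reproduced by these settings.
convert(
    "unsloth/Qwen3-Coder-30B-A3B-Instruct",
    mlx_path="unsloth-Qwen3-Coder-30B-A3B-Instruct-qx6-mlx",
    quantize=True,
    q_bits=6,
)
```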
Use with mlx
```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the quantized model and its tokenizer from the Hugging Face Hub.
model, tokenizer = load("unsloth-Qwen3-Coder-30B-A3B-Instruct-qx6-mlx")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
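For interactive use, responses can also be streamed token by token. This is a minimal sketch assuming mlx-lm's stream_generate, which in recent releases yields response chunks exposing a .text field.

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("unsloth-Qwen3-Coder-30B-A3B-Instruct-qx6-mlx")

messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Print tokens as they arrive instead of waiting for the full response.
for chunk in stream_generate(model, tokenizer, prompt=prompt, max_tokens=128):
    print(chunk.text, end="", flush=True)
print()
```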
Model tree for nightmedia/unsloth-Qwen3-Coder-30B-A3B-Instruct-qx6-mlx
Base model: Qwen/Qwen3-Coder-30B-A3B-Instruct