---
license: apache-2.0
base_model: DavidAU/Qwen3-2x6B-TNG-Deckard-Alpha-12B-LH
datasets:
- DavidAU/TNG-ALL
- DavidAU/PKD-all
language:
- en
pipeline_tag: text-generation
tags:
- programming
- code generation
- code
- coding
- coder
- chat
- brainstorm
- qwen
- qwen3
- qwencoder
- brainstorm 20x
- creative
- all uses cases
- Jan-V1
- horror
- science fiction
- fantasy
- Star Trek
- The Next Generation
- TNG
- Philip K. Dick
- Deckard
- finetune
- thinking
- reasoning
- unsloth
- moe
- mixture of experts
- merge
- mlx
library_name: mlx
---
# Qwen3-2x6B-TNG-Deckard-Alpha-12B-LH-mlx
Perplexity: 5.010 ± 0.035 (additional metrics coming soon).
This model, `Qwen3-2x6B-TNG-Deckard-Alpha-12B-LH-mlx`, was converted to MLX format from [DavidAU/Qwen3-2x6B-TNG-Deckard-Alpha-12B-LH](https://huggingface.co/DavidAU/Qwen3-2x6B-TNG-Deckard-Alpha-12B-LH) using `mlx-lm` version 0.28.2.
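
For reference, conversions of this kind are typically produced with the `mlx_lm.convert` utility. The sketch below is an assumption rather than the exact command used: the 8-bit quantization setting is inferred from this model's 8-bit precision, and the output path simply mirrors the model name above.

```bash
# Hedged sketch: reconverting the base model to quantized MLX weights.
# --q-bits 8 is an assumption based on the model's 8-bit precision.
mlx_lm.convert \
    --hf-path DavidAU/Qwen3-2x6B-TNG-Deckard-Alpha-12B-LH \
    -q --q-bits 8 \
    --mlx-path Qwen3-2x6B-TNG-Deckard-Alpha-12B-LH-mlx
```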
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Load the model weights and tokenizer from the local path or Hub repo.
model, tokenizer = load("Qwen3-2x6B-TNG-Deckard-Alpha-12B-LH-mlx")

prompt = "hello"

# Wrap the raw prompt in the model's chat template when one is available.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
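
Alternatively, `mlx-lm` ships a command-line generator, so a quick smoke test of the same model might look like this (the prompt is illustrative):

```bash
# One-off generation from the shell, using the same model path as above.
mlx_lm.generate --model Qwen3-2x6B-TNG-Deckard-Alpha-12B-LH-mlx --prompt "hello"
```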