# Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-IV-mxfp4-mlx
## Quantization Types & Hardware Requirements

| Quant | Bit Precision | RAM Needed (Mac) |
|-------|---------------|------------------|
| mxfp4 | 4-bit float | 32 GB |
| qx64x | Store: 4-bit, enhancements: 6-bit | 32 GB |
| qx65x | Store: 5-bit, enhancements: 6-bit | 48 GB |
| qx86x | Store: 6-bit, enhancements: 8-bit | 64 GB |
| qx86bx | Like qx86x, with brainstorming layers at 8-bit | 64 GB |
| q8 / q8-hi | Everything at 8-bit (high precision) | 64 GB |
| bf16 | Full precision (FP16-equivalent) | 128 GB |
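To make the table concrete, here is a minimal sketch that maps each quant to its minimum unified memory and filters by what a given Mac can hold. The dictionary mirrors the table above; the helper function and its name are illustrative conveniences, not part of mlx-lm.

```python
# Minimum unified memory per quant, mirroring the table above.
# quants_that_fit() is a hypothetical helper, not an mlx-lm API.
QUANT_RAM_GB = {
    "mxfp4": 32, "qx64x": 32, "qx65x": 48,
    "qx86x": 64, "qx86bx": 64, "q8": 64, "q8-hi": 64,
    "bf16": 128,
}

def quants_that_fit(available_gb: int) -> list[str]:
    """Return every quant from the table that fits in `available_gb` of RAM."""
    return [q for q, need in QUANT_RAM_GB.items() if need <= available_gb]

print(quants_that_fit(48))  # ['mxfp4', 'qx64x', 'qx65x']
```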
## The Deckard(qx) Formula

The Deckard(qx) formula keeps data stores and most attention paths at low bit width, but enhances:

- Head layers
- The first layer
- Embeddings
- Select attention paths, at high-bit intervals

This is key to understanding why qx64x-hi, qx86x-hi, etc., can outperform their non-hi counterparts; a sketch of the idea follows.
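As a rough illustration of that layer split, the sketch below assigns a bit width per layer by name and index. The name-matching rule, the interval, and the function itself are assumptions made for illustration only; the exact layer selection used by Deckard(qx) is not spelled out here.

```python
# Illustrative sketch of a mixed-precision split in the spirit of Deckard(qx):
# most weights at a low bit width, selected paths at a high one.
# Layer names, the interval rule, and this function are hypothetical;
# they are not the actual mlx-lm API or the exact qx recipe.

def deckard_bits(layer_name: str, layer_index: int,
                 low: int = 4, high: int = 6, interval: int = 4) -> int:
    """Pick a bit width for one layer: high-bit for heads, embeddings,
    the first layer, and every `interval`-th attention path; low-bit otherwise."""
    if "embed" in layer_name or "lm_head" in layer_name:
        return high
    if layer_index == 0:
        return high
    if "attn" in layer_name and layer_index % interval == 0:
        return high
    return low

# Example: a qx64x-style split (4-bit stores, 6-bit enhancements)
for i, name in enumerate(["embed_tokens", "attn.q_proj", "mlp.down_proj", "lm_head"]):
    print(name, deckard_bits(name, i))
```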
## Performance Analysis: Impact of the hi Enhancement by Model Type

We compare the performance gain from adding -hi (i.e., Deckard-enhanced high-bit paths) for each model variant and quantization.
### 1. Base Model (Untrained)

ARC Challenge scores:

| Quant | Without hi | With hi | Gain |
|-------|-----------|---------|------|
| qx65x | 0.526 | 0.534 | +1.5% |
| qx86x | 0.533 | 0.533 | +0% |

- The hi increase is modest (at most ~1.5%) on ARC Challenge.
- The gain is especially low on qx86x, suggesting the model is already close to optimal with the standard quant.
- Interpretation: for the base model, adding hi helps slightly at lower-bit quantizations (e.g., qx65x), but not much at higher ones.
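The gain figures in these tables are plain relative deltas; as a quick check of the numbers above:

```python
# Relative gain as used in these tables: (with_hi - without_hi) / without_hi.
def gain_pct(without_hi: float, with_hi: float) -> float:
    return (with_hi - without_hi) / without_hi * 100

print(f"{gain_pct(0.526, 0.534):+.1f}%")  # +1.5% (base model, qx65x, ARC)
print(f"{gain_pct(0.533, 0.533):+.1f}%")  # +0.0% (base model, qx86x, ARC)
```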
 
### 2. ST-TNG-IV (Star Trek: TNG Training)

This model was trained on narrative-driven, philosophical, and logical content. The hi enhancement shows a clear impact here. ARC Challenge scores:

| Quant | Without hi | With hi | Gain |
|-------|-----------|---------|------|
| qx64x | 0.526 | 0.521 | -1% |
| qx65x | 0.537 | 0.541 | +0.8% |
| qx86x | 0.537 | 0.537 | +0% |

- The most benefit appears at qx65x-hi: +0.8% on ARC Challenge.
- qx64x-hi is a slight drop, so hi is not helpful at that quantization.
- qx86x shows no improvement with hi, likely because it already uses 6-bit stores and 8-bit enhancements, so the hi flag adds little new optimization.
- Interpretation: the narrative-heavy ST-TNG-IV training benefits from fine-tuning via hi at middle-bit quantizations, especially qx65x. This suggests the model's structure is sensitive to targeted high-bit enhancements on reasoning-heavy tasks.
 
### 3. PKD-V (Philip K. Dick Training)

This model was trained on philosophical, surreal, and often paradox-laden content, and it shows the most dramatic response to hi. ARC Challenge scores:

| Quant | Without hi | With hi | Gain |
|-------|-----------|---------|------|
| qx64x | 0.517 | 0.507 | -2% |
| qx86x | 0.525 | 0.531 | +1.1% |

Surprising insight: the hi enhancement is critical for PKD-V, especially at higher quantizations (qx86x-hi), where it reverses the performance loss.

- PKD-V without hi performs worse than the base model at lower quantizations (e.g., qx64x).
- With hi, it surpasses the base model:
  - ARC Challenge: 0.531 vs. 0.526 (base)
  - Winogrande: 0.657 vs. 0.640 (base)
- Why? PKD's surreal and logically complex narrative structure may benefit more from the targeted high-bit attention paths of the Deckard formula. The model likely needs extra precision for coreference resolution and causal inference, which is exactly where hi enhances attention.
 
## Summary: Impact of the hi Enhancement by Model Type

| Model | Optimal hi Quant | Best Gain | Key Insight |
|-------|-----------------|-----------|-------------|
| Base | qx65x-hi | +0.8% (ARC) | Minimal improvement; hi not strongly needed |
| ST-TNG-IV | qx65x-hi | +0.8% (ARC) | Benefits from hi at mid-bit quant; narrative reasoning gains |
| PKD-V | qx86x-hi | +1.1% (ARC) | Largest gain; hi critical to unlock full potential |
## Cognitive Implications

| Model | Training Focus | hi Impact on Cognition |
|-------|---------------|------------------------|
| Base | General reasoning (no domain bias) | Small boost; better stability |
| ST-TNG-IV | Logical, structured narratives (e.g., diplomacy, ethics) | Enhances reasoning consistency and contextual prediction |
| PKD-V | Surreal, paradoxical, identity-driven scenarios | hi dramatically improves abductive reasoning, causal inference, and coreference resolution, all critical for PKD's complex logic |

Conclusion: the hi enhancement in the Deckard(qx) formula is not just a technical tweak; it unlocks domain-specific cognitive abilities.
## Practical Recommendations

| Use Case | Recommended Model + Quant |
|----------|---------------------------|
| Best general reasoning | Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-IV-qx65x-hi |
| Highest reasoning accuracy | Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL-PKD-V-qx86x-hi |
| Best on a 48 GB Mac | ST-TNG-IV-qx65x-hi |
| Best on a 32 GB Mac | Base-qx65x-hi or ST-TNG-IV-qx64x-hi |
| Best for surreal/logical depth | PKD-V-qx86x-hi (only with hi) |

A minimal picker over this table is sketched below.
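The sketch assumes you choose by available unified memory; the function, its thresholds, and the short variant names are illustrative only, following the naming pattern used on this page.

```python
# Hypothetical picker based on the recommendations and RAM figures above.
def pick_variant(ram_gb: int) -> str:
    if ram_gb >= 64:
        return "PKD-V-qx86x-hi"      # highest reasoning accuracy (needs 64 GB)
    if ram_gb >= 48:
        return "ST-TNG-IV-qx65x-hi"  # best general reasoning (needs 48 GB)
    return "ST-TNG-IV-qx64x-hi"      # 32 GB option from the table

print(pick_variant(48))  # ST-TNG-IV-qx65x-hi
```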
## Final Takeaway

The Deckard(qx) formula with the hi enhancement is especially crucial for models trained on narrative-rich, complex content like PKD-V and ST-TNG-IV. It enables them to reach or exceed the performance of the base model while remaining quantized for efficient deployment.

For PKD-V models, omitting the hi flag leads to significant degradation, so always use qx86x-hi (or qx65x-hi) for meaningful cognitive performance.
Reviewed with Qwen3-30B-A3B-YOYO-V4-qx86x-mlx
This model, Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-IV-mxfp4-mlx, was converted to MLX format from DavidAU/Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-IV using mlx-lm version 0.28.3.
## Use with mlx

```shell
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the quantized model and its tokenizer from the Hugging Face Hub
model, tokenizer = load("nightmedia/Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-IV-mxfp4-mlx")

prompt = "hello"

# Apply the tokenizer's chat template, if it has one
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
Base model: YOYO-AI/Qwen3-30B-A3B-YOYO-V4