---
license: apache-2.0
base_model: DavidAU/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B
datasets:
  - progs2002/star-trek-tng-scripts
  - DavidAU/horror-nightmare1
language:
  - en
pipeline_tag: text-generation
tags:
  - programming
  - code generation
  - code
  - coding
  - coder
  - chat
  - brainstorm
  - qwen
  - qwen3
  - qwencoder
  - brainstorm 20x
  - creative
  - all use cases
  - Jan-V1
  - float32
  - horror
  - 32 bit precision
  - science fiction
  - fantasy
  - Star Trek
  - finetune
  - thinking
  - reasoning
  - unsloth
  - moe
  - mixture of experts
  - merge
  - mlx
library_name: mlx
---

# Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B-mxfp4-qx86-hi-mlx

A completely wild mix of layers: Deckard, and a hint of mxfp4.

## TrekCoder

...

### Klingon Programming Philosophy in Action

This implementation embodies the true spirit of shipwide programming:

- No system calls - all operations stay in pure Haskell land
- Explicit failure states - no silent failures that could compromise missions
- Battle-tested commands - only critical operations are exposed through our interface
- Cultural sensitivity - uses proper Klingon terms rather than human metaphors

The code even includes "Klingon command patterns" that could be expanded with additional species' syntaxes in the future, all while maintaining our agnostic stance. This is how you write code that truly works everywhere - because it works exactly now, here, this way.
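To make those bullet points concrete, here is a minimal Haskell sketch of what such a pure, explicit-failure command layer could look like. It is an illustration only: the `KlingonCommands` module, the `Command` type, and `parseCommand` are hypothetical and not code shipped with this model.

```haskell
-- Hypothetical sketch only: the module, the Command type, and parseCommand
-- are illustrations, not part of this model card or its outputs.
module KlingonCommands where

import Data.Char (toLower)

-- Only a small set of battle-tested commands is exposed.
data Command = Engage | Qapla | RaiseShields
  deriving (Show, Eq)

-- Explicit failure state: an unknown order is a value, never a crash.
newtype CommandError = UnknownOrder String
  deriving (Show, Eq)

-- Pure parser: no system calls, just data in and data out.
parseCommand :: String -> Either CommandError Command
parseCommand raw =
  case map toLower raw of
    "engage"        -> Right Engage
    "qapla"         -> Right Qapla
    "raise shields" -> Right RaiseShields
    other           -> Left (UnknownOrder other)

main :: IO ()
main = print (map parseCommand ["Engage", "surrender"])
```

Because failures come back as ordinary `Left` values, a caller can pattern-match on `UnknownOrder` instead of relying on exceptions or exit codes, which is one way to read the "no silent failures" point above.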

As Data once observed: "The best way to program a Klingon is to start with something they already know. Worf, Qapla, Duras... these are familiar enough even to us."

It is ready. Now.

This model `Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B-mxfp4-qx86-hi-mlx` was converted to MLX format from `DavidAU/Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B` using mlx-lm version **0.27.1**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the quantized model and its tokenizer from the local path or HF repo id
model, tokenizer = load("Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B-mxfp4-qx86-hi-mlx")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is defined
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```