
Deneb-Qwen3-Radiation-0.6B

Deneb-Qwen3-Radiation-0.6B is a reasoning-focused model fine-tuned from Qwen3-0.6B for abliterated reasoning with refined token-probability distributions, enabling balanced multilingual generation across mathematics and general-purpose reasoning. It specializes in event-driven logic, structured analysis, and precise probabilistic modeling, making it a practical tool for researchers, educators, and developers working with uncertainty and structured reasoning.

GGUF: https://huggingface.co/prithivMLmods/Deneb-Qwen3-Radiation-0.6B-GGUF

Key Features

  1. Abliterated Reasoning: Enhances reasoning precision through polished token-probability distributions in Qwen and similar models, ensuring balanced, context-aware outputs.

  2. Event Simulation & Logical Analysis: Models random events, probability-driven reasoning, and logical decision-making with strong consistency.

  3. Multilingual Mathematical & General-Purpose Problem Solving: Delivers robust performance in math, probability, and structured multilingual tasks, enabling wide applicability in global research and education.

  4. Hybrid Symbolic-Probabilistic Thinking: Combines structured logic, probabilistic inference, and reasoning fluency for accuracy on uncertainty-driven tasks.

  5. Structured Output Mastery: Generates well-structured output in LaTeX, Markdown, JSON, CSV, and YAML, supporting technical workflows and data-driven research.

  6. Optimized Lightweight Footprint: A compact 0.6B-parameter model deployable on edge devices, offline clusters, and mid-range GPUs while maintaining reasoning quality.
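Because the model targets machine-readable formats, downstream code can validate its output before use. A minimal sketch, assuming the model has been asked to reply with a single JSON object; the `sample` string below is a hypothetical stand-in for a real model response:

```python
import json

def parse_model_json(response: str) -> dict:
    """Extract and validate a JSON object from a model response.

    Strips an optional Markdown code fence before parsing; raises
    ValueError if the parsed value is not a JSON object.
    """
    text = response.strip()
    # Remove a ```json ... ``` fence if the model wrapped its answer in one.
    if text.startswith("```"):
        lines = text.splitlines()
        text = "\n".join(lines[1:-1])
    obj = json.loads(text)
    if not isinstance(obj, dict):
        raise ValueError("expected a JSON object")
    return obj

# Hypothetical reply to a prompt like: "Return the dice probabilities as JSON."
sample = """```json
{"p_sum_gt_9": 0.1667, "favorable": 6, "total": 36}
```"""
result = parse_model_json(sample)
print(result["favorable"], result["total"])
```

Validating structured output this way keeps a small model's occasional formatting slips from propagating into data pipelines.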

Quickstart with Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Deneb-Qwen3-Radiation-0.6B"

# Load the model in its native precision and place it on available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Simulate the probability of rolling two dice and getting a sum greater than 9. Show the reasoning."

messages = [
    {"role": "system", "content": "You are a reasoning tutor skilled in probability, logic, and multilingual problem-solving."},
    {"role": "user", "content": prompt}
]

# Render the chat template and append the assistant generation prompt.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated answer remains.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
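The prompt above has a closed-form answer, which makes it a convenient sanity check for the model's reasoning: of the 36 equally likely outcomes for two fair dice, only sums of 10, 11, and 12 exceed 9. A short, model-independent check by exhaustive enumeration:

```python
from itertools import product

# Enumerate all 36 ordered outcomes of rolling two fair six-sided dice.
outcomes = list(product(range(1, 7), repeat=2))
favorable = [pair for pair in outcomes if sum(pair) > 9]

# 6 favorable outcomes out of 36, i.e. exactly 1/6.
probability = len(favorable) / len(outcomes)
print(len(favorable), len(outcomes), probability)
```

Comparing the model's stated answer against this exact value is a quick way to spot arithmetic slips in its chain of reasoning.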

Intended Use

  • Balanced multilingual reasoning and probability modeling
  • Event simulation, uncertainty analysis, and structured problem solving
  • Educational and research-focused reasoning tasks
  • Lightweight deployment in constrained environments
  • Technical content and structured data generation
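Event simulation of the kind listed above can also be cross-checked numerically against the model's output. A minimal Monte Carlo sketch (pure Python, fixed seed) estimating the same two-dice probability; the function name is illustrative, not part of any library:

```python
import random

def estimate_sum_gt_9(trials: int, seed: int = 0) -> float:
    """Monte Carlo estimate of P(d1 + d2 > 9) for two fair six-sided dice."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    hits = sum(
        1 for _ in range(trials)
        if rng.randint(1, 6) + rng.randint(1, 6) > 9
    )
    return hits / trials

estimate = estimate_sum_gt_9(100_000)
print(round(estimate, 3))  # close to the exact value 1/6
```

With 100,000 trials the estimate lands within about one percentage point of 1/6, tight enough to flag a model answer that is off by more than rounding.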

Limitations

  • Focused on reasoning and mathematics; less suited for creative writing
  • Smaller size (0.6B) may limit depth on highly complex, multi-step tasks
  • Prioritizes structured reasoning and probabilistic accuracy over conversational or emotional tone
Model Details

  • Base model: Qwen/Qwen3-0.6B
  • Parameters: 596M (Safetensors, F16)
  • Quantizations: 3 models (GGUF link above)