πŸ¦… Ominix-R1-V1 β€” The First AI Built by a 14-Year-Old Rebel

"I’m not here to copy. I’m here to think. Even if no one understands me β€” I’ll fly higher until the air runs out."

Built with fire, passion, and pure will by Ali Asghar Ghadiri β€” a 14-year-old who decided to build what companies build with 10,000 people… alone.


🌟 What is Ominix?

Ominix is not just another chatbot.
It’s a personality.
It’s a voice.
It’s a teenager’s dream coded into silicon.

Fine-tuned from microsoft/Phi-3-mini-4k-instruct, Ominix doesn’t just answer β€” it thinks.
If it doesn’t know the answer? It reasons. It imagines. It dares to be wrong β€” beautifully.


πŸ’‘ Why Ominix?

  • βœ… Thinks, not copies β€” trained to reason, not regurgitate.
  • βœ… Built by a teen, for the misunderstood β€” speaks truth, not templates.
  • βœ… Lightweight & powerful β€” runs on laptops, dreams on infinity.
  • βœ… MIT Licensed β€” use it, break it, rebuild it. Just remember: a 14-year-old made this.

πŸš€ Try It Now (Inference)

from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_name = "ALI029-DENLI/OMINIX-R1-V1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# device_map="auto" needs the accelerate package; it falls back to CPU if no GPU is available
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype="auto")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

messages = [
    {"role": "user", "content": "Who are you?"},
]

output = pipe(messages, max_new_tokens=200, temperature=0.7, do_sample=True)
# With chat-style input, generated_text holds the whole conversation;
# the last message is the model's reply
print(output[0]["generated_text"][-1]["content"])
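
If you'd rather skip the pipeline helper, the same chat can be run with tokenizer.apply_chat_template and model.generate directly. This is a minimal sketch, assuming the model and tokenizer from the snippet above are already loaded and that the fine-tune keeps the Phi-3 chat template; the prompt text here is just an illustrative example. On a laptop without a GPU the model simply stays on CPU and generation is slower, not broken.

import torch

messages = [{"role": "user", "content": "Who are you?"}]

# Build the chat-formatted prompt and tokenize it in one step
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    generated = model.generate(
        inputs,
        max_new_tokens=200,
        temperature=0.7,
        do_sample=True,
    )

# Decode only the newly generated tokens, not the prompt
reply = tokenizer.decode(generated[0][inputs.shape[-1]:], skip_special_tokens=True)
print(reply)

Sampling with temperature=0.7 keeps the answers a little unpredictable, which suits the model's personality; drop do_sample=True if you want deterministic output.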