
m-ric 
posted an update 2 days ago
STOP EVERYTHING NOW - we might finally have a radical architecture improvement over Transformers!!! 🚨

A lone scientist just proposed Tiny Recursive Model (TRM), and it is literally the most impressive model that I've seen this year.

➡️ Tiny Recursive Model is 7M parameters
➡️ On ARC-AGI, it beats flagship models like Gemini-2.5-pro

Consider how wild this is: Gemini-2.5-pro must be over 10,000x bigger
and has 1,000x as many authors 😂 (Alexia is alone on the paper)

What's this sorcery?
In short: it's a very tiny Transformer that loops over itself at two different frequencies, updating two latent variables: one for the proposed answer and one for the reasoning.

@AlexiaJM started from the paper Hierarchical Reasoning Model, published a few months ago, which already showed breakthrough results on ARC-AGI for its small size (27M params)

Hierarchical Reasoning Model had introduced one main feature:
🔎 Deep supervision
In their model, one part (here one layer) would run at high frequency, and another would run at lower frequency, updating only every n steps.

They had used a recurrent architecture, in which these layers would repeat many times; but to make it work they had to make several approximations, including not fully backpropagating the loss through all layers.

Alexia studied what was useful and what wasn't, and cleaned up the architecture as follows:
Why use a recurrent architecture, when you can just make it a loop?
➡️ She made the network recursive, looping over itself

Why use 2 latent variables?
➡️ She provides a crystal-clear explanation: the one that changes frequently is the reasoning, the one that changes at low frequency is the proposed answer.
➡️ She runs ablation studies to validate that 2 is indeed optimal.
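The two-frequency recursion described above can be sketched in a few lines. This is a toy NumPy illustration of the idea only, not the paper's architecture: the real core is a small Transformer, and the names (`trm_sketch`, `n_outer`, `n_inner`, the MLP weights) are made up here.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16

# Illustrative random weights; a single matrix stands in for the shared network.
W_core = rng.normal(size=(3 * dim, dim)) * 0.1  # reads (input, answer, reasoning)
W_ans  = rng.normal(size=(2 * dim, dim)) * 0.1  # reads (answer, reasoning)

def trm_sketch(x, n_outer=3, n_inner=6):
    """Two-frequency recursion: z (reasoning) updates every inner step,
    y (proposed answer) updates only once per outer step."""
    z = np.zeros_like(x)  # reasoning latent -> high frequency
    y = np.zeros_like(x)  # proposed answer  -> low frequency
    for _ in range(n_outer):
        for _ in range(n_inner):
            # high-frequency reasoning update, conditioned on input + current answer
            z = np.tanh(np.concatenate([x, y, z]) @ W_core)
        # low-frequency answer refinement from the current reasoning state
        y = np.tanh(np.concatenate([y, z]) @ W_ans)
    return y

x = rng.normal(size=dim)
print(trm_sketch(x).shape)  # (16,)
```

The point of the sketch is the control flow: one shared block, looped over itself, with the answer latent touched far less often than the reasoning latent.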

This new setup is a much more elegant way to process reasoning than generating huge chains of tokens as all flagship models currently do.

This might be the breakthrough we've been awaiting for so long!
hba123 
posted an update 2 days ago
🤖 What if building your own robot arm costs less than £220?

For years, robotics has been locked behind high prices and complex systems.
So we decided to change that.

Today, we’re open-sourcing Ark-Bot — a fully 3D-printed, 6-DOF robot arm that works seamlessly with our Python robotics library, Ark.

And yes… It’s only £215.86 to build.

🧠 Ark-Bot Specs 🧠

1️⃣ Reach: 1 meter
2️⃣ Weight: 2.6 kg
3️⃣ Payload: 1.8 kg 💪
4️⃣ DOF: 6
5️⃣ Input Voltage: DC 12V

🤟Fully 3D-printable & open-source
🤟Integrated with Ark — no ROS required


📹 We’ve also released a video showing the full assembly process — because robotics should be something everyone can learn, build, and improve on.

👩‍🎓 With Ark-Bot, anyone — from students to AI researchers — can experiment with embodied AI, robot learning, and control algorithms on real hardware, affordably.

If you could control a 1-meter robot arm from your laptop for under £220…
👉 What would you build first?

🔗https://github.com/Robotics-Ark/ark_bot
🎥 https://www.youtube.com/watch?v=Kuk4pC0EaEw&feature=youtu.be
mlabonne 
posted an update 2 days ago
LiquidAI/LFM2-8B-A1B just dropped!

8.3B params with only 1.5B active/token 🚀

> Quality ≈ 3–4B dense, yet faster than Qwen3-1.7B
> MoE designed to run on phones/laptops (llama.cpp / vLLM)
> Pre-trained on 12T tokens → strong math/code/IF
Severian 
posted an update 2 days ago
MLX port of BDH (Baby Dragon Hatchling) is up!

I’ve ported the BDH ( https://github.com/pathwaycom/bdh ) model to MLX for Apple Silicon. It’s a faithful conversion of the PyTorch version: same math, same architecture (byte-level vocab, shared weights across layers, ReLU sparsity, RoPE attention with Q=K), with MLX-friendly APIs and a detailed README explaining the few API-level differences and why results are equivalent.
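Two of the details mentioned above (ReLU sparsity and attention with Q=K) can be illustrated in plain NumPy. This is a toy, hypothetical sketch, not code from either repo: the shapes and weight names are invented, and rotary (RoPE) position encoding is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 8, 16                            # sequence length, head dimension

x = rng.normal(size=(T, d))
W_qk = rng.normal(size=(d, d)) * 0.1    # ONE shared projection: queries == keys
W_v  = rng.normal(size=(d, d)) * 0.1

h = np.maximum(x, 0.0)                  # ReLU induces sparse activations
qk = h @ W_qk                           # same tensor serves as both Q and K
scores = qk @ qk.T / np.sqrt(d)         # symmetric before the causal mask

mask = np.tril(np.ones((T, T), dtype=bool))
scores = np.where(mask, scores, -np.inf)

# numerically stable softmax over each row
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)

out = attn @ (h @ W_v)
print(out.shape)  # (8, 16)
```

With Q=K the score matrix is symmetric before masking, which is the structural quirk the README's "Q=K" note refers to; the MLX port keeps that property intact.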

Code, docs, and training script are ready to use. You may need to adjust the training script a bit to fit your own custom dataset. Only tested on M4 so far, but it should work perfectly on M1/M2/M3 as well.

I’m currently training this MLX build on my Internal Knowledge Map (IKM) dataset Severian/Internal-Knowledge-Map
Training’s underway; expect a day or so before I publish weights. When it’s done, I’ll upload the checkpoint to Hugging Face for anyone to test.

Repo: https://github.com/severian42/BDH-MLX
HF model (coming soon): Severian/BDH-MLX

If you try it on your own data, feedback and PRs are welcome.
AdinaY 
posted an update 1 day ago
At the close of the National Holiday 🇨🇳, Ant Group drops a new SoTA model.

Ling-1T 🔥 the trillion-parameter flagship of the Ling 2.0 series.

inclusionAI/Ling-1T

✨1T total / 50B active params per token
✨20T+ reasoning-dense tokens (Evo-CoT)
✨128K context via YaRN
✨FP8 training: 15%+ faster, same precision as BF16
✨Hybrid Syntax-Function-Aesthetics reward for front-end & visual generation
AdamF92 
posted an update 3 days ago
Hi, I just published a research paper introducing my Reactive Transformer (RxT) architecture. I would be grateful if you could check it out and upvote it on HuggingFace Daily Papers - Reactive Transformer (RxT) -- Stateful Real-Time Processing for Event-Driven Reactive Language Models (2510.03561)

The architecture is based on stateful real-time processing with an innovative asynchronous memory update. Instead of reprocessing the whole conversation history for each message, it processes only the current query, with all the context moved to dedicated memory layers. Memory is updated after the answer is generated, so it doesn't affect latency - in tests, time to first token was almost the same as the time to generate a single token. It also achieves better quality/accuracy in multi-turn dialogue than a same-size stateless decoder-only model.
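The interaction flow (answer first, memory update deferred so it never sits on the latency path) can be sketched with stdlib threading. This is a hypothetical illustration, not RxT code: `ReactiveChatSketch`, `_generate`, and `_update_memory` are invented names, and a list stands in for the dedicated memory layers.

```python
import threading

class ReactiveChatSketch:
    def __init__(self):
        self.memory = []  # stand-in for RxT's dedicated memory layers

    def _generate(self, query, memory):
        # placeholder for the decoder: current query + fixed-size memory -> answer
        return f"answer({query}|mem={len(memory)})"

    def _update_memory(self, query, answer):
        # runs AFTER the reply is produced, off the latency-critical path
        self.memory.append((query, answer))

    def respond(self, query):
        answer = self._generate(query, self.memory)  # uses memory as-is
        t = threading.Thread(target=self._update_memory, args=(query, answer))
        t.start()   # memory update does not block returning the answer
        t.join()    # joined here only to keep this demo deterministic
        return answer

bot = ReactiveChatSketch()
print(bot.respond("hi"))     # answer(hi|mem=0)
print(bot.respond("again"))  # answer(again|mem=1)
```

The key property mirrored here is that each reply reads the memory but never waits for it to be rewritten, which is why time to first token stays flat as the conversation grows.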

Initial experiments were small-scale (12M to 160M param models trained on simple synthetic datasets), but I'm now starting training of a bigger 270M param model on real data.



Collection: ReactiveAI/reactive-transformer-poc-rxt-alpha-supervised-models-68e4004a4a59366e01a7b86f
Profile: ReactiveAI
Ethank01 
posted an update about 2 hours ago
No invitation code needed — create AI videos with one click!
Experience Sora 2, Veo 3, and Wan 2.2 all in one place on iMini.
👉 Try it here: https://imini.com/
giadap 
posted an update about 21 hours ago
🌎 AI ethics and sustainability are two sides of the same coin.

In our new blog post with Dr. Sasha Luccioni, we argue that separating them (as is too often the case) means missing the bigger picture of how AI systems impact both people and the planet.

Ethical and sustainable AI development can’t be pursued in isolation. The same choices that affect who benefits or is harmed by AI systems also determine how much energy and resources they consume.

We explore how two key concepts, evaluation and transparency, can serve as bridges between these domains:

📊 Evaluation, by moving beyond accuracy or performance metrics to include environmental and social costs, as we’ve done with tools like the AI Energy Score.

🔍 Transparency, by enabling reproducibility, accountability, and environmental reporting through open tools like the Environmental Transparency Space.

AI systems mirror our priorities. If we separate ethics from sustainability, we risk building technologies that are efficient but unjust, or fair but unsustainable.

Read our blog post here: https://huggingface.co/blog/sasha/ethics-sustainability

AIEnergyScore/Leaderboard
sasha/environmental-transparency
jwgu 
posted an update 1 day ago
🎉 NEW RELEASES: Cosmos Predict 2.5 and Transfer 2.5

Cosmos Predict 2.5:
- Combines Text2World, Image2World, and Video2World
- Multimodal, future-state video prediction

Cosmos Transfer 2.5:
- High-fidelity multicontrol world simulations
- Inputs: RGB, depth, segmentation—blended seamlessly

These updates boost development of autonomous vehicles, robotics, and video analytics.

Don’t miss Jensen Huang’s keynote at NVIDIA GTC Washington, D.C. on 10/28 to hear the latest in physical AI.

📺 Watch live: https://nvda.ws/4pUjF4x

🔗 Try Predict 2.5:
https://nvda.ws/4otReZZ

🔗 Try Transfer 2.5:
https://nvda.ws/46GEx7T