
Ujjwal-Tyagi 
posted an update 6 days ago
We now have LTX 2.3, with better visual quality and richer sound. Check it out! Lightricks/LTX-2.3
Ujjwal-Tyagi 
posted an update 17 days ago
Public reports allege that Anthropic gobbled up trillions of tokens of copyrighted material and public data to build their castle. 🏰📄 Now that they're sitting on top, they're begging for special laws to protect their profits while pulling the ladder up behind them. 🪜🚫

But the hypocrisy meter just broke! 📉 They are accusing Chinese labs like DeepSeek, MiniMax, and Kimi of "huge distillation attacks." The reality is that you can't just loot the entire internet's library, lock the door, and then sue everyone else for reading through the window. Stop trying to gatekeep tech you didn't own in the first place. Read the complete article: https://huggingface.co/blog/Ujjwal-Tyagi/the-dark-underbelly-of-anthropic
Ujjwal-Tyagi 
posted an update 24 days ago
The Qwen 3.5 model is here! Supporting 1M context length by default, it delivers strong performance, competitive with Claude Opus 4.6: Qwen/Qwen3.5-397B-A17B. Here is its GGUF: unsloth/Qwen3.5-397B-A17B-GGUF. Follow me and turn on notifications for the latest news!
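The "397B-A17B" naming encodes the MoE shape: 397B total parameters with roughly 17B active per token. A back-of-envelope sketch of what that implies (figures read off the model name only, not official numbers; memory estimates cover weights alone, ignoring KV cache and activations):

```python
# Rough arithmetic on the MoE footprint implied by "397B-A17B".
total_params = 397e9   # total parameters
active_params = 17e9   # parameters active per token

active_fraction = active_params / total_params
print(f"Active fraction per token: {active_fraction:.1%}")  # ~4.3%

# Approximate weight memory at common precisions.
for bytes_per_param, label in [(2, "bf16"), (0.5, "4-bit")]:
    gib = total_params * bytes_per_param / 2**30
    print(f"{label}: ~{gib:,.0f} GiB for weights")
```

The small active fraction is why MoE models of this size can still be served at reasonable per-token cost, even though the full weights must reside in memory.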
AdinaY 
posted an update 27 days ago
MiniMax M2.5 is now available on the hub 🚀

MiniMaxAI/MiniMax-M2.5

✨ 229B - Modified MIT license
✨ 37% faster than M2.1
✨ ~$1/hour at 100 TPS
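The pricing bullet can be turned into a per-token figure. Assuming the quoted numbers hold at a sustained rate ($1/hour at 100 tokens/second), the implied cost works out as:

```python
# Back-of-envelope: what "$1/hour at 100 TPS" implies per million tokens.
dollars_per_hour = 1.0
tokens_per_second = 100

tokens_per_hour = tokens_per_second * 3600  # 360,000 tokens/hour
cost_per_million = dollars_per_hour / tokens_per_hour * 1e6
print(f"~${cost_per_million:.2f} per million tokens")  # ~$2.78
```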
Ujjwal-Tyagi 
posted an update 28 days ago
GLM 5 is insane, it ranks #4 Globally!
AdinaY 
posted an update 29 days ago
Game on 🎮🚀

While Seedance 2.0’s videos are all over the timeline, DeepSeek quietly pushed a new model update in its app.

GLM-5 from Z.ai adds more momentum.

Ming-flash-omni from Ant Group, MiniCPM-SALA from OpenBMB, and the upcoming MiniMax M2.5 keep the heat on 🔥

Spring Festival is around the corner,
no one’s sleeping!

✨ More releases coming, stay tuned
https://huggingface.co/collections/zh-ai-community/2026-february-china-open-source-highlights
AdinaY 
posted an update 29 days ago
Ming-flash-omni 2.0 🚀 New open omni-MLLM released by Ant Group

inclusionAI/Ming-flash-omni-2.0

✨ MIT license
✨ MoE - 100B/6B active
✨ Zero-shot voice cloning + controllable audio
✨ Fine-grained visual knowledge grounding
AdinaY 
posted an update about 1 month ago
LLaDA 2.1 is out 🔥 A new series of MoE diffusion language models released by Ant Group

inclusionAI/LLaDA2.1-mini
inclusionAI/LLaDA2.1-flash

✨ LLaDA2.1-mini: 16B - Apache 2.0
✨ LLaDA2.1-flash: 100B - Apache 2.0
✨ Both deliver editable generation, RL-trained diffusion reasoning, and fast inference
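Diffusion language models generate by iteratively unmasking a fully masked sequence rather than decoding strictly left to right. A toy sketch of that loop, to make the idea concrete (the scorer here is a hypothetical stand-in, not LLaDA's actual model or sampling schedule):

```python
import random

# Toy diffusion-style generation: start from an all-masked sequence and
# unmask a few positions per step, keeping the most "confident" guesses.
MASK = "<m>"

def toy_scorer(seq, pos):
    """Hypothetical stand-in for the model: returns (token, confidence)."""
    vocab = ["the", "cat", "sat", "on", "a", "mat"]
    rng = random.Random(pos)  # deterministic per position, for the demo
    return vocab[pos % len(vocab)], rng.random()

def denoise(length=6, per_step=2):
    seq = [MASK] * length
    while MASK in seq:
        masked = [i for i, t in enumerate(seq) if t == MASK]
        # Score every masked position; unmask the most confident ones.
        by_confidence = sorted(masked, key=lambda i: -toy_scorer(seq, i)[1])
        for i in by_confidence[:per_step]:
            seq[i] = toy_scorer(seq, i)[0]
    return seq

print(denoise())  # all positions filled after length/per_step steps
```

Because many positions are denoised in parallel per step, this family of models can trade decoding steps for throughput, which is what the "fast inference" bullet refers to.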
AdinaY 
posted an update about 1 month ago
AI for science is moving fast🚀

Intern-S1-Pro 🔬 a MoE multimodal scientific reasoning model from Shanghai AI Lab

internlm/Intern-S1-Pro

✨ 1T total / 22B active
✨ Apache 2.0
✨ SoTA scientific reasoning performance
✨ FoPE enables scalable modeling of long physical time series (10⁰–10⁶)
AdinaY 
posted an update about 1 month ago
✨ China’s open source AI ecosystem has entered a new phase

https://huggingface.co/blog/huggingface/one-year-since-the-deepseek-moment-blog-3

One year after the “DeepSeek Moment,” open source has become the default. Models, research, infrastructure, and deployment are increasingly shared to support large-scale, system-level integration.

This final blog examines how leading Chinese AI organizations are evolving, and what this implies for the future of open source.
AdinaY 
posted an update about 1 month ago
GLM just entered the OCR field🔥

zai-org/GLM-OCR

✨ 0.9B
✨ MIT licensed
✨ Multimodal GLM-V architecture
✨ #1 on OmniDocBench v1.5 (94.62)
AdinaY 
posted an update about 1 month ago
What a week 🤯

Following DeepSeek, Kimi, Qwen, Baidu, and Ant Group, Unitree Robotics has now released a VLA model on the hub too!

unitreerobotics/UnifoLM-VLA-Base
AdinaY 
posted an update about 1 month ago
LongCat-Flash-Lite🔥 a non-thinking MoE model released by the Meituan LongCat team.

meituan-longcat/LongCat-Flash-Lite

✨ Total 68.5B / 3B active - MIT license
✨ 256k context
✨ Faster inference with N-gram embeddings
AdinaY 
posted an update about 1 month ago
Ant Group is going big on robotics 🤖

They just dropped their first VLA and depth perception foundation models on Hugging Face.

✨ LingBot-VLA:
- Trained on 20k hours of real-world robot data
- 9 robot embodiments
- Clear no-saturation scaling laws
- Apache 2.0

Model: https://huggingface.co/collections/robbyant/lingbot-vla
Paper: A Pragmatic VLA Foundation Model (2601.18692)

✨ LingBot-Depth:
- Metric-accurate 3D from noisy, incomplete depth
- Masked Depth Modeling (self-supervised)
- RGB–depth alignment, works with <5% sparse depth
- Apache 2.0

Model: https://huggingface.co/collections/robbyant/lingbot-depth
Paper: Masked Depth Modeling for Spatial Perception (2601.17895)
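The "<5% sparse depth" setting can be made concrete with a small sketch: randomly keep a few depth pixels as visible input and mark the rest as reconstruction targets, which is the kind of masking setup a masked-depth-modeling objective trains against (shapes, seed, and keep ratio here are illustrative, not LingBot-Depth's actual configuration):

```python
import random

# Illustrative sparse-depth masking: keep ~5% of ground-truth depth
# pixels as input; the dropped pixels become self-supervised targets.
def sparsify_depth(depth, keep_ratio=0.05, seed=0):
    rng = random.Random(seed)
    sparse, target_mask = [], []
    for row in depth:
        kept, mask = [], []
        for d in row:
            if rng.random() < keep_ratio:
                kept.append(d)      # visible input pixel
                mask.append(False)
            else:
                kept.append(0.0)    # hidden: model must reconstruct it
                mask.append(True)
        sparse.append(kept)
        target_mask.append(mask)
    return sparse, target_mask

# A synthetic 20x20 depth map with values >= 1.0 metre.
depth = [[1.0 + 0.1 * (r + c) for c in range(20)] for r in range(20)]
sparse, mask = sparsify_depth(depth)
visible = sum(not m for row in mask for m in row)
print(f"visible depth pixels: {visible}/400")
```

Training to reconstruct the masked majority from the visible few is what lets such a model produce dense, metric-accurate depth from very incomplete sensor input.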