Public reports allege that Anthropic gobbled up trillions of tokens of copyrighted material and public data to build their castle. 🏰📄 Now that they're sitting on top, they're begging for special laws to protect their profits while pulling the ladder up behind them. 🪜🚫
But the hypocrisy meter just broke! 📉 They are accusing Chinese labs like DeepSeek, MiniMax, and Kimi of "huge distillation attacks." The reality is that you can't just loot the entire internet's library, lock the door, and then sue everyone else for reading through the window. Stop trying to gatekeep tech you never owned in the first place. Read the full article here: https://huggingface.co/blog/Ujjwal-Tyagi/the-dark-underbelly-of-anthropic
Qwen 3.5 is here! Supporting a 1M context length by default, it delivers strong performance, competitive with Claude Opus 4.6. Model: Qwen/Qwen3.5-397B-A17B, and here's the GGUF: unsloth/Qwen3.5-397B-A17B-GGUF. Follow me and turn on notifications for the latest news!
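If you want to try the quantized weights locally, here is a minimal sketch that pulls one quantization from the GGUF repo with `huggingface_hub`; the `Q4_K_M` filename pattern is an assumption about how the repo names its files, so adjust it to what's actually listed.

```python
# Minimal sketch: download a single quantization from the GGUF repo.
# Assumption: a Q4_K_M quant exists in unsloth/Qwen3.5-397B-A17B-GGUF;
# change the pattern to match the files you see on the model page.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="unsloth/Qwen3.5-397B-A17B-GGUF",
    allow_patterns=["*Q4_K_M*"],  # pull only one quant level to save disk
)
print("GGUF files downloaded to:", local_dir)
```

From there you can point your local runner (llama.cpp, LM Studio, etc.) at the downloaded GGUF files.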
One year after the “DeepSeek Moment,” open source has become the default. Models, research, infrastructure, and deployment are increasingly shared to support large-scale, system-level integration.
This final blog post examines how leading Chinese AI organizations are evolving, and what this implies for the future of open source.
✨ Sparse MoE: 196B total / 11B active
✨ Supports up to 256K context
✨ Multi-token prediction for fast decoding (100–300 tok/s)
✨ Runs locally on consumer hardware
They just dropped their first VLA and depth perception foundation model on Hugging Face.
✨ LingBot-VLA:
- Trained on 20k hours of real-world robot data
- 9 robot embodiments
- Clear no-saturation scaling laws
- Apache 2.0