Every Attention Matters: An Efficient Hybrid Architecture for Long-Context Reasoning
Paper • 2510.19338
Art of Focus: Page-Aware Sparse Attention and Ling 2.0’s Quest for Efficient Context Length Scaling
Article • By RichardBian and 19 others
Ring-flash-linear-2.0: A Highly Efficient Hybrid Architecture for Test-Time Scaling
Article • By RichardBian and 8 others