Outcome Accuracy is Not Enough: Aligning the Reasoning Process of Reward Models Paper • 2602.04649 • Published 11 days ago • 11
Jamba2 Collection Jamba2 is a highly efficient open-source family of language models built for maximum reliability and steerability in the enterprise. • 3 items • Updated Jan 8 • 5
LFM2.5 Collection Collection of Instruct, Base, and Japanese LFM2.5-1.2B models. • 22 items • Updated 13 days ago • 87
i3-Series Collection Note: The models are listed in the default order set by Hugging Face, so the latest model appears at the bottom of the series. • 9 items • Updated 18 days ago • 2
MemMamba: Rethinking Memory Patterns in State Space Model Paper • 2510.03279 • Published Sep 28, 2025 • 73
Less is More: Recursive Reasoning with Tiny Networks Paper • 2510.04871 • Published Oct 6, 2025 • 507
Towards Greater Leverage: Scaling Laws for Efficient Mixture-of-Experts Language Models Paper • 2507.17702 • Published Jul 23, 2025 • 6