Alessandro Ercolani

giux78

AI & ML interests

NLP, Reinforcement Learning, Semantics, Computational Neuroscience

Recent Activity

reacted to robtacconelli's post with 🚀 about 5 hours ago
🏆 Nacrith: a 135M model that out-compresses everything on natural language

What if a tiny LM could compress English text better than _every_ compressor out there — classical or neural, small or large? Nacrith pairs SmolLM2-135M with an ensemble of online predictors and high-precision arithmetic coding.

What's inside

The standard LLM + arithmetic coding approach wastes ~75% of CDF precision on large vocabularies. Our CDF-24 fix alone recovers 0.5 bpb. On top of that: a token N-gram model that skips the GPU on predictable tokens, an adaptive bias head, a llama.cpp backend (7× faster than PyTorch), multi-GPU parallel compression, and a binary file format (NC06) — the first LLM-based binary compressor we know of. Runs on a GTX 1050 Ti: ~500 MB of weights, ~1.2 GB VRAM per worker.

💻 Code: https://github.com/robtacconelli/Nacrith-GPU
⭐ Space: https://huggingface.co/spaces/robtacconelli/Nacrith-GPU
📄 Paper: https://huggingface.co/papers/2602.19626

Try it, break it, share your results — all feedback welcome. A ⭐ on the repo is appreciated!

Results across all systems we tested:
- alice29.txt → 0.918 bpb (−44% vs CMIX, −20% vs ts_zip) — below the 2nd-order Shannon entropy bound
- enwik8 (100 MB) → 0.9389 bpb (−8% vs FineZip/LLMZip's 8B model, −15% vs ts_zip)
- Unseen text → 0.723 bpb on a document published after the training cutoff — no memorization, 26% better than FineZip/LLMZip on the same model

SmolLM2-135M by https://huggingface.co/HuggingFaceTB
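For readers unfamiliar with the technique, here is a minimal sketch of LLM-driven arithmetic coding and why CDF precision matters. This is not Nacrith's code: the names `quantize_cdf`, `encode`, and `next_cdf`, and the bit widths, are illustrative assumptions. The key constraint is that a quantized CDF must give every token at least one count to keep it encodable; with 16-bit CDFs and SmolLM2's ~49k-token vocabulary, that floor alone consumes roughly 75% of the 65,536 bins, which is presumably the waste a 24-bit CDF avoids.

```python
# Minimal sketch of LLM-driven arithmetic coding (illustrative assumptions,
# not Nacrith's actual implementation).
import numpy as np

def quantize_cdf(probs: np.ndarray, bits: int) -> np.ndarray:
    """Quantize a next-token distribution to an integer CDF summing to 2**bits.
    Every token gets at least one count so it stays encodable; with a ~49k
    vocab at bits=16 that floor alone eats ~75% of the bins, while bits=24
    leaves the distribution essentially intact."""
    total = 1 << bits
    counts = np.maximum(1, (probs * (total - len(probs))).astype(np.int64))
    counts[np.argmax(probs)] += total - counts.sum()  # absorb rounding drift
    return np.concatenate(([0], np.cumsum(counts)))

def encode(tokens, next_cdf, bits=24, prec=32):
    """Integer arithmetic encoder. next_cdf(prefix) returns the quantized CDF
    for the next token, e.g. an LM's softmax over the prefix, quantized."""
    lo, hi = 0, (1 << prec) - 1
    half, quarter = 1 << (prec - 1), 1 << (prec - 2)
    pending, bits_out = 0, []

    def emit(bit):
        nonlocal pending
        bits_out.append(bit)
        bits_out.extend([1 - bit] * pending)  # flush carry-pending bits
        pending = 0

    for i, tok in enumerate(tokens):
        cdf = next_cdf(tokens[:i])
        span = hi - lo + 1
        # Narrow the interval to this token's slice of the quantized CDF.
        hi = lo + span * int(cdf[tok + 1]) // (1 << bits) - 1
        lo = lo + span * int(cdf[tok]) // (1 << bits)
        while True:  # renormalize, emitting bits that are already settled
            if hi < half:
                emit(0)
            elif lo >= half:
                emit(1); lo -= half; hi -= half
            elif lo >= quarter and hi < half + quarter:
                pending += 1; lo -= quarter; hi -= quarter  # underflow case
            else:
                break
            lo, hi = 2 * lo, 2 * hi + 1
    pending += 1
    emit(0 if lo < quarter else 1)  # final bits pin down the interval
    return bits_out
```

In a real pipeline, `next_cdf` would run the LM on the prefix, softmax the logits, and quantize; the decoder then mirrors the same interval arithmetic with identical CDFs to recover the tokens.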
liked a model 21 days ago
mii-llm/nesso-4B

Organizations

Rocket AI, Spaces-explorers, Blog-explorers, FairMind, Business Operating System, mii-community, Social Post Explorers, mii-llm, Coloss, nanochat students