This is a pure subquadratic, linear-attention 8B-parameter model, linearized from Meta's Llama 3.1 8B model.

Details on this model and how to train your own are provided at: https://github.com/HazyResearch/lolcats/tree/lolcats-scaled

Demo

Here is a quick GitHub Gist that will help you run inference on the model checkpoints.
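As a rough starting point, the sketch below shows one way such inference might look. This is a hypothetical example, not the official Gist: it assumes the checkpoint can be loaded through the standard Hugging Face `transformers` API, whereas the actual linear-attention inference path is provided by the lolcats repository linked above.

```python
# Hypothetical inference sketch. Assumes (unverified) that the checkpoint
# loads via the standard transformers API; the lolcats repo provides the
# actual linearized-attention inference code.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "hazyresearch/lolcats-llama-3.1-8b-distill"

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    # Load the tokenizer and model weights from the Hub.
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
    # Tokenize the prompt and generate a continuation.
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Linear attention reduces the cost of"))
```

For the checkpoint-specific setup (custom attention modules, generation settings), follow the repository and Gist referenced above rather than this sketch.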

Paper

See the paper page: https://huggingface.co/papers/2410.10254
