adding a small gist to help run the demo #2
opened by ariG23498 (HF staff)

README.md CHANGED
```diff
@@ -5,4 +5,8 @@ language:
 
 This is a pure sub-quadratic linear attention 8B parameter model, linearized from the Meta Llama 3.1 8B model.
 
-Details on this model and how to train your own are provided at: https://github.com/HazyResearch/lolcats/tree/lolcats-scaled
+Details on this model and how to train your own are provided at: https://github.com/HazyResearch/lolcats/tree/lolcats-scaled
+
+## Demo
+
+Here is a quick [GitHub GIST](https://gist.github.com/ariG23498/45b0c2afc95ca4c4b7cf64fbc161c1e7) that will help you run inference on the model checkpoints.
```
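The linked gist is the authoritative reference for running the checkpoints. As a rough orientation only, the sketch below shows what a standard `transformers` generation loop could look like, assuming the weights are published under a transformers-compatible repository; the repository id used here is a placeholder, and the linearized attention layers may in practice require the loading utilities from the lolcats repository instead.

```python
# Minimal sketch, NOT the contents of the gist: assumes the checkpoint can be
# loaded through plain `transformers` (possibly via custom modeling code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hazyresearch/lolcats-llama-3.1-8b"  # hypothetical repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,  # in case the repo ships custom linear-attention modules
)

# Run a short generation to sanity-check the checkpoint.
inputs = tokenizer("Linear attention lets us", return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

If this direct route does not work for a given checkpoint, fall back to the steps in the gist and the lolcats repository linked in the README.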