brucethemoose committed
Commit 0b6d68c
Parent(s): 0475128
Update README.md

README.md CHANGED
@@ -14,7 +14,9 @@ pipeline_tag: text-generation
 
 https://github.com/yule-BUAA/MergeLM
 
-https://github.com/cg123/mergekit/tree/dare
+https://github.com/cg123/mergekit/tree/dare'
+
+24GB GPUs can run these models at 45K-75K context with exllamav2. I go into more detail in this [Reddit post](https://old.reddit.com/r/LocalLLaMA/comments/1896igc/how_i_run_34b_models_at_75k_context_on_24gb_fast/)
 
 ***
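A back-of-envelope sketch of why 45K-75K context can fit alongside a quantized 34B model on a 24 GB card: the dominant long-context cost is the KV cache, which exllamav2 can hold at reduced precision. The dimensions below are assumptions modeled on a Yi-34B-style architecture (60 layers, 8 grouped-query KV heads, head dimension 128, 8-bit cache elements), not values taken from this repo.

```python
# Rough KV-cache sizing for long-context inference on a 24 GB GPU.
# Assumed Yi-34B-like dims: 60 layers, 8 GQA KV heads, head_dim 128,
# with the cache stored at 1 byte per element (8-bit).

def kv_cache_bytes(context_len, n_layers=60, n_kv_heads=8, head_dim=128,
                   bytes_per_elem=1):
    """Total bytes for keys + values across all layers at this context length."""
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * context_len

for ctx in (45_000, 75_000):
    gib = kv_cache_bytes(ctx) / 2**30
    print(f"{ctx} tokens -> {gib:.1f} GiB of 8-bit KV cache")
```

Under these assumptions the cache stays in the 5-9 GiB range, leaving room on a 24 GB card for ~3-4 bpw exl2 weights; a 16-bit cache would roughly double that and stop fitting at the high end.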