flashvenom committed
Commit 12aaa7d
1 Parent(s): cf149e5
Update README.md
README.md CHANGED
@@ -1,3 +1,4 @@
Model upload in 4-bit GPTQ version, converted using GPTQ-for-LLaMa; source model from https://huggingface.co/Peeepy/Airoboros-13b-SuperHOT-8k.

You will need a monkey patch at inference time to use the 8k context; please see the patch file included. If you are using a different inference engine (such as llama.cpp or exllama), you will need to apply the monkey patch there.
+Patch file is present in the repo, or can be accessed here: https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test/main/llama_rope_scaled_monkey_patch.py
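
For context, here is a minimal sketch of how such a RoPE-scaling monkey patch is typically applied before loading the model. The function name `replace_llama_rope_with_scaled_rope`, the AutoGPTQ loader, and the local model path are assumptions for illustration, not confirmed by this repo; check the linked patch file for the exact API.

```python
# Sketch only: apply the RoPE-scaling monkey patch BEFORE the model is
# instantiated, so the patched rotary embeddings (scaled for 8k context)
# are the ones the model actually uses at inference time.
# `replace_llama_rope_with_scaled_rope` is the assumed entry point of the
# linked llama_rope_scaled_monkey_patch.py.
from llama_rope_scaled_monkey_patch import replace_llama_rope_with_scaled_rope

replace_llama_rope_with_scaled_rope()  # must run before building the model

# After patching, load the 4-bit GPTQ checkpoint with your usual loader.
# AutoGPTQ is assumed here; GPTQ-for-LLaMa or other loaders work similarly.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_dir = "./Airoboros-13b-SuperHOT-8k-GPTQ"  # hypothetical local path

tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoGPTQForCausalLM.from_quantized(model_dir, device="cuda:0")

prompt = "Summarize the following long document: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The key point is ordering: the patch replaces the LLaMA rotary embedding implementation inside transformers, so it has to run before the model classes are constructed; engines that do not go through transformers (llama.cpp, exllama) need the equivalent scaling applied in their own code.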