
GGUF quants of TeeZee/Kyllene-34B-v1.1. Remember to set your max context length to a value your hardware can handle; 4096 is fine. The model's default context length is 200k, so it will eat RAM or VRAM like crazy if left unchecked (in llama.cpp, pass `-c 4096`).
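A quick back-of-the-envelope sketch of why the context cap matters: the KV cache grows linearly with context length. Assuming Yi-34B-like dimensions for this model (60 layers, 8 KV heads, head dim 128, fp16 cache) — these numbers are an assumption, check the actual model config:

```python
def kv_cache_bytes(n_ctx, n_layers=60, n_kv_heads=8, head_dim=128, bytes_per_elem=2):
    # 2 tensors (K and V) per layer, one entry per token per KV head,
    # fp16 = 2 bytes per element. Dimensions are assumed, not verified.
    return 2 * n_layers * n_ctx * n_kv_heads * head_dim * bytes_per_elem

for ctx in (4096, 200_000):
    print(f"n_ctx={ctx:>7}: {kv_cache_bytes(ctx) / 2**30:.1f} GiB")
# n_ctx=   4096: 0.9 GiB
# n_ctx= 200000: 45.8 GiB
```

Under these assumptions, leaving the context at 200k costs roughly 45 GiB of KV cache on top of the model weights, versus under 1 GiB at 4096.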

Format: GGUF
Model size: 34.4B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit

