GGUF files for https://huggingface.co/brucethemoose/Yi-34B-200K-RPMerge

The FP16 file is split with 7zip (store-only, i.e. no compression) to get around the 50GB per-file upload limit. Use 7zip to recombine the parts.
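A minimal sketch of the split/recombine workflow with the `7z` command-line tool. The filenames are placeholders, not the actual part names in this repository; `-mx=0` selects store-only mode and `-v49g` caps each volume at 49 GB, assumed here to stay under the limit.

```shell
# Split: store-only (-mx=0) archive in ~49 GB volumes (.001, .002, ...)
7z a -mx=0 -v49g model-fp16.gguf.7z model-fp16.gguf

# Recombine: extract starting from the first volume;
# 7z picks up the remaining .00N parts automatically.
7z x model-fp16.gguf.7z.001
```

Because the archive is store-only, splitting and joining is fast and the recombined `.gguf` is byte-identical to the original.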

Format: GGUF
Model size: 34B params
Architecture: llama
