llama3-42b-v0-iMat-GGUF

Quantized from fp32 with love. All credit to Charles Goddard for the original model.

  • Weighted quantizations were calculated using groups_merged.txt with 105 chunks (the recommended amount for this file) and n_ctx=512, as sketched below. Special thanks to jukofyork for sharing this process.
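
A minimal sketch of that process, assuming a recent llama.cpp build (the binary names, file names, and example quant type below are illustrative):

    # 1. Compute the importance matrix from the calibration file,
    #    processing 105 chunks at a context size of 512.
    ./llama-imatrix -m llama3-42b-v0-f32.gguf -f groups_merged.txt \
        -o llama3-42b-v0.imatrix --chunks 105 -c 512

    # 2. Quantize the full-precision GGUF using that matrix, e.g. to IQ4_XS.
    ./llama-quantize --imatrix llama3-42b-v0.imatrix \
        llama3-42b-v0-f32.gguf llama3-42b-v0-IQ4_XS.gguf IQ4_XS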

For more information on the pruning technique used to produce this model, see: https://arxiv.org/abs/2403.17887

Brief rundown of iMatrix quant performance

All quants were verified working before being uploaded to the repo, for your safety and convenience; a quick way to run the same check locally is shown below.
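
A simple smoke test, assuming a llama.cpp build (the file name is illustrative), is to load the quant and generate a few tokens:

    ./llama-cli -m llama3-42b-v0-IQ4_XS.gguf -p "Hello" -n 32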

Tip: For best speed, pick a size that fits in your GPU's VRAM while still leaving some room for context. You may need to leave additional headroom if you are also running image generation or TTS.
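
As a rough back-of-envelope example: at ~43.2B parameters, a 4-bit quant needs roughly 43.2e9 × 4 / 8 ≈ 21.6 GB for the weights alone, so on a 24 GB card that leaves only a few GB for the KV cache and anything else sharing the GPU.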

The FP16 model card can be found here.

Model size: 43.2B params (llama architecture). Quantizations are provided at 1-bit through 8-bit.
