miquliz-120b - Q4 GGUF

Description

This repo contains Q4_K_S and Q4_K_M GGUF format model files for Wolfram Ravenwolf's miquliz-120b.

Prompt template: Mistral

[INST] {prompt} [/INST]
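For programmatic use, the template above amounts to wrapping the user's text in `[INST]` tags; a minimal sketch (the function name is an assumption, not part of this repo):

```python
def format_prompt(prompt: str) -> str:
    """Wrap a user prompt in the Mistral instruction template used by this model."""
    return f"[INST] {prompt} [/INST]"

print(format_prompt("Write a haiku about llamas."))
# → [INST] Write a haiku about llamas. [/INST]
```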

Provided files

Name                     | Quant method | Bits | Size
miquliz-120b.Q4_K_S.gguf | Q4_K_S       | 4    | 66.81 GB
miquliz-120b.Q4_K_M.gguf | Q4_K_M       | 4    | 70.64 GB

Note: Hugging Face does not support uploading single files larger than 50 GB, so each model file is uploaded as split parts that must be joined back together before use.


Model tree for NanoByte/miquliz-120b-Q4-GGUF

Quantized from Wolfram Ravenwolf's miquliz-120b.