HPC-Coder-v2-6.7b-Q8_0-GGUF

This is the HPC-Coder-v2-6.7b model with 8-bit quantized weights in the GGUF format, for use with llama.cpp. Refer to the original model card for more details on the model.

Use with llama.cpp

See the llama.cpp repository for installation instructions. You can then run the model as follows:

llama-cli --hf-repo hpcgroup/hpc-coder-v2-6.7b-Q8_0-GGUF --hf-file hpc-coder-v2-6.7b-q8_0.gguf -r "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:" --in-prefix "\n" --in-suffix "\n### Response:\n" -c 8096 -p "your prompt here"
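The flags above encode the model's instruction-following prompt template: the `-r` string is the fixed header, `--in-prefix` and `--in-suffix` wrap your instruction, and the model generates after `### Response:`. The following Python sketch shows how that full prompt is assembled; the function name and structure are illustrative, not part of the model's own tooling:

```python
# Sketch of the prompt template implied by the llama-cli flags above
# (-r header, --in-prefix "\n", --in-suffix "\n### Response:\n").
# build_prompt is a hypothetical helper for illustration only.

INSTRUCTION_HEADER = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:"
)

def build_prompt(instruction: str) -> str:
    # header + in-prefix ("\n") + user instruction + in-suffix ("\n### Response:\n")
    return INSTRUCTION_HEADER + "\n" + instruction + "\n### Response:\n"

print(build_prompt("Write an OpenMP loop that sums an array."))
```

The same string can be passed as `-p` when scripting llama-cli, or used to format prompts for any other GGUF runtime.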
Format: GGUF
Model size: 6.74B params
Architecture: llama
Quantization: 8-bit (Q8_0)

