---
base_model: GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct
language:
- en
- id
- jv
- su
license: llama3
tags:
- llama-cpp
- gguf
---

# Supa-AI/llama3-8b-cpt-sahabatai-v1-instruct-q4_0-gguf

This model was converted to GGUF format from [`GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct`](https://huggingface.co/GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct) using llama.cpp. Refer to the [original model card](https://huggingface.co/GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct) for more details on the model.

## Use with llama.cpp

### CLI:

```bash
llama-cli --hf-repo Supa-AI/llama3-8b-cpt-sahabatai-v1-instruct-q4_0-gguf --hf-file llama3-8b-cpt-sahabatai-v1-instruct.q4_0.gguf -p "Your prompt here"
```

### Server:

```bash
llama-server --hf-repo Supa-AI/llama3-8b-cpt-sahabatai-v1-instruct-q4_0-gguf --hf-file llama3-8b-cpt-sahabatai-v1-instruct.q4_0.gguf -c 2048
```

## Model Details

- **Quantization Type:** q4_0
- **Original Model:** [GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct](https://huggingface.co/GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct)
- **Format:** GGUF
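
## Querying the server

Once `llama-server` is running, it exposes an OpenAI-compatible HTTP API. A minimal sketch of sending a chat request with `curl`, assuming the server's default host and port (`127.0.0.1:8080`); adjust if you started the server with `--host`/`--port`:

```bash
# Send a chat completion request to the local llama-server instance
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Halo, apa kabar?"}
    ],
    "max_tokens": 128
  }'
```

The response follows the OpenAI chat-completions JSON shape, so existing OpenAI client libraries can typically be pointed at this endpoint by overriding the base URL.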