---
|
license: apache-2.0 |
|
language: |
|
- en |
|
pipeline_tag: text-generation |
|
tags: |
|
- text-generation |
|
- gguf |
|
- llama |
|
base_model: meta-llama/Meta-Llama-3-8B-Instruct |
|
quantized_by: liashchynskyi |
|
--- |
|
|
|
## Description |
|
|
|
This repository contains GGUF format model files for [Meta Llama 3 8B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).
|
|
|
## Prompt template |
|
|
|
``` |
|
<|start_header_id|>system<|end_header_id|> |
|
|
|
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|> |
|
|
|
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|> |
|
``` |
|
|
|
This is the same template used by [Ollama's llama3:instruct](https://ollama.com/library/llama3:instruct/blobs/8ab4849b038c).
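A prompt matching this template can be assembled with a small helper, sketched here in Python (the function and argument names are illustrative, not part of any library):

```python
def build_llama3_prompt(system_prompt: str, prompt: str) -> str:
    """Fill the Llama 3 instruct template shown above (helper name is illustrative)."""
    return (
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{prompt}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(build_llama3_prompt("You are a helpful assistant.", "Hello!"))
```

The trailing assistant header is left open so the model continues generation from there; the model emits `<|eot_id|>` when it finishes its turn.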
|
|
|
## Downloading using huggingface-cli |
|
|
|
First, make sure you have huggingface-cli installed:
|
|
|
``` |
|
pip install -U "huggingface_hub[cli]" |
|
``` |
|
|
|
Then, you can target the specific file you need: |
|
|
|
``` |
|
huggingface-cli download liashchynskyi/Meta-Llama-3-8B-Instruct-GGUF --include "meta-llama-3-8b-instruct.Q4_K_M.gguf" --local-dir ./ --local-dir-use-symlinks False |
|
``` |
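After the download completes, a quick sanity check is to inspect the file's first four bytes: every valid GGUF file begins with the ASCII magic `GGUF`. A minimal sketch (the path in the usage comment assumes the Q4_K_M file from the example above):

```python
def is_gguf(header: bytes) -> bool:
    # Valid GGUF files begin with the 4-byte ASCII magic "GGUF".
    return header[:4] == b"GGUF"

# Usage (path assumes the Q4_K_M file from the download example above):
# with open("meta-llama-3-8b-instruct.Q4_K_M.gguf", "rb") as f:
#     print(is_gguf(f.read(4)))
```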
|
|
|
|