---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- text-generation
- gguf
- llama
base_model: meta-llama/Meta-Llama-3-8B-Instruct
quantized_by: liashchynskyi
---
## Description
This repository contains GGUF format model files for [Meta Llama 3 8B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).
## Prompt template
```
<|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

```
This is the same template used by Ollama's `llama3:instruct`: https://ollama.com/library/llama3:instruct/blobs/8ab4849b038c
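If you assemble prompts by hand (e.g. when calling a raw completion endpoint rather than a chat API), the template can be filled in with ordinary string formatting. A minimal sketch; the function name is illustrative, and the blank line after each `<|end_header_id|>` follows the Llama 3 chat format:

```python
def build_llama3_prompt(system_prompt: str, prompt: str) -> str:
    """Assemble a single-turn Llama 3 prompt string from the template above."""
    return (
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{prompt}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(build_llama3_prompt("You are a helpful assistant.", "What is GGUF?"))
```

The model's reply ends with an `<|eot_id|>` token, so configure it as a stop sequence in whatever runtime you use.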
## Downloading using huggingface-cli
First, make sure you have `huggingface-cli` installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you need:
```
huggingface-cli download liashchynskyi/Meta-Llama-3-8B-Instruct-GGUF --include "meta-llama-3-8b-instruct.Q4_K_M.gguf" --local-dir ./ --local-dir-use-symlinks False
```
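If you would rather script the download, the `huggingface_hub` library exposes the same functionality via `hf_hub_download`, and `llama-cpp-python` is one common way to run GGUF files locally. A minimal sketch under those assumptions; `n_ctx` and the example messages are illustrative, and the third-party imports are kept inside `main()` so nothing heavy runs at import time:

```python
REPO_ID = "liashchynskyi/Meta-Llama-3-8B-Instruct-GGUF"
FILENAME = "meta-llama-3-8b-instruct.Q4_K_M.gguf"


def main() -> None:
    # Requires: pip install huggingface_hub llama-cpp-python
    from huggingface_hub import hf_hub_download
    from llama_cpp import Llama

    # Downloads (or reuses a cached copy of) the quantized model file.
    model_path = hf_hub_download(repo_id=REPO_ID, filename=FILENAME)

    # chat_format="llama-3" applies the prompt template shown above.
    llm = Llama(model_path=model_path, n_ctx=8192, chat_format="llama-3")
    out = llm.create_chat_completion(
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "What is GGUF?"},
        ],
        max_tokens=128,
    )
    print(out["choices"][0]["message"]["content"])


if __name__ == "__main__":
    main()
```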