|
--- |
|
license: apache-2.0 |
|
language: |
|
- zh |
|
- en |
|
--- |
|
|
|
# Llama-3-Chinese-8B-Instruct-v3-GGUF |
|
|
|
<p align="center"> |
|
<a href="https://github.com/ymcui/Chinese-LLaMA-Alpaca-3"><img src="https://ymcui.com/images/chinese-llama-alpaca-3-banner.png" width="600"/></a> |
|
</p> |
|
|
|
This repository contains **Llama-3-Chinese-8B-Instruct-v3-GGUF** (compatible with llama.cpp, Ollama, text-generation-webui, etc.), the quantized version of [Llama-3-Chinese-8B-Instruct-v3](https://huggingface.co/hfl/llama-3-chinese-8b-instruct-v3).
|
|
|
**Note: This is an instruction-tuned (chat) model, suitable for conversation, QA, etc.**
|
|
|
For further details (performance, usage, etc.), please refer to the GitHub project page: https://github.com/ymcui/Chinese-LLaMA-Alpaca-3
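
As a quick start, here is a minimal sketch using [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) (`pip install llama-cpp-python`). The `.gguf` filename below is an assumption; substitute whichever quant you downloaded from this repository. Ollama and text-generation-webui can load the same files.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3-chinese-8b-instruct-v3-q8_0.gguf",  # hypothetical filename; use your downloaded quant
    n_ctx=4096,             # context window size
    chat_format="llama-3",  # Llama 3 instruct prompt template
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "你好，请介绍一下你自己。"}]
)
print(response["choices"][0]["message"]["content"])
```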
|
|
|
## Performance |
|
|
|
Metric: PPL, lower is better |
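
For reference, perplexity is the exponential of the negative mean per-token log-likelihood over the evaluation text, so lower values mean the model assigns higher probability to that text. A minimal illustration in Python:

```python
import math

def perplexity(token_logprobs):
    """PPL = exp(-mean log-likelihood) over the evaluation tokens."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Toy example with hypothetical per-token log-probabilities:
print(perplexity([-1.2, -0.8, -2.1, -1.5]))  # ~4.06
```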
|
|
|
*Note: Unless constrained by memory, we suggest using Q8_0 or Q6_K for better performance.* |
|
|
|
| Quant | Size | PPL | |
|
| :---: | -------: | ------------------: | |
|
| Q2_K | 2.96 GB | 10.0534 +/- 0.13135 | |
|
| Q3_K | 3.74 GB | 6.3295 +/- 0.07816 | |
|
| Q4_0 | 4.34 GB | 6.3200 +/- 0.07893 | |
|
| Q4_K | 4.58 GB | 6.0042 +/- 0.07431 | |
|
| Q5_0 | 5.21 GB | 6.0437 +/- 0.07526 | |
|
| Q5_K | 5.34 GB | 5.9484 +/- 0.07399 | |
|
| Q6_K | 6.14 GB | 5.9469 +/- 0.07404 | |
|
| Q8_0 | 7.95 GB | 5.8933 +/- 0.07305 | |
|
| F16 | 14.97 GB | 5.8902 +/- 0.07303 | |
|
|
|
## Others |
|
|
|
- For the full (unquantized) model, see: https://huggingface.co/hfl/llama-3-chinese-8b-instruct-v3

- If you have questions or issues regarding this model, please open an issue at https://github.com/ymcui/Chinese-LLaMA-Alpaca-3