# Llama3-70B-Chinese-Chat-AWQ-32k

## Model Description

This repository provides a 4-bit AWQ-quantized version of [shenzhi-wang's full-parameter fine-tuned Llama3-70B-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3-70B-Chinese-Chat).

The original model is Llama3-70B fine-tuned for Chinese chat, improving its ability to handle Chinese dialogue.

In addition, we include an optional configuration file that extends the context length from the original 8k to 32k, enabling the model to process longer text sequences and making it suitable for scenarios that require richer contextual information.

### Quantization

We used 4-bit AWQ quantization to reduce the weight precision. In preliminary tests the model's performance holds up well, and the quantized model can run in environments with limited resources.
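The quantized checkpoint should load like any other AWQ model. Below is a minimal loading sketch, assuming `transformers`, `autoawq`, and `accelerate` are installed; the repo id is a placeholder for wherever you keep these files, so adjust it to your actual path:

```python
# Minimal loading sketch. The repo id below is a placeholder -- replace it
# with the actual location of this repository or a local directory.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<your-namespace>/Llama3-70B-Chinese-Chat-AWQ-32k"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the dtype recorded in the config
    device_map="auto",    # spread the 70B layers across available GPUs
)

# This is a chat model, so format the prompt with the chat template.
messages = [{"role": "user", "content": "你好,请介绍一下你自己。"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```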

### Context Extension

To support longer contexts, we added a configuration file named `config-32k.json`. When you need to process texts that exceed the original context limit, enable this feature by simply replacing the configuration file.

Please note that this is an experimental feature: longer context lengths may affect the model's performance, so test it against your actual usage scenarios.

(By default, the original `config.json` from Llama3 is used, which has an 8k context. To enable the 32k context length, replace `config.json` in the model files with `config-32k.json`. The effect is not guaranteed; please test it yourself.)
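One way to do the swap is sketched below with `huggingface_hub`. Only the `config.json` / `config-32k.json` file names come from this repository; the repo id and backup file name are placeholders:

```python
# Sketch of enabling the 32k context: download the files to a plain local
# directory, then overwrite config.json with config-32k.json before loading.
# The repo id is a placeholder -- point it at the actual location of this repo.
import shutil
from pathlib import Path

from huggingface_hub import snapshot_download

local_dir = Path(
    snapshot_download(
        "<your-namespace>/Llama3-70B-Chinese-Chat-AWQ-32k",   # placeholder
        local_dir="Llama3-70B-Chinese-Chat-AWQ-32k",          # plain local copy
    )
)

# Keep a copy of the original 8k config so the change is easy to revert.
shutil.copy(local_dir / "config.json", local_dir / "config-8k.json.bak")
shutil.copy(local_dir / "config-32k.json", local_dir / "config.json")

# Loading the model from `local_dir` now picks up the 32k context settings.
```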

## Original Model Link

https://huggingface.co/shenzhi-wang/Llama3-70B-Chinese-Chat

Thanks to the open-source community for its contributions to adapting Llama3 for Chinese.