Update README.md
README.md
CHANGED
@@ -1,3 +1,48 @@
---
license: other
---

### Introduction

Basically an update to our earlier attempt, [vicuna-chinese-replication-beta](https://huggingface.co/keyfan/vicuna-chinese-replication-beta).
* We adopted a curriculum-learning-like approach, starting from simple QAs and moving on to reasoning-intensive coding and mathematical problems (a purely illustrative sketch follows this list). Coincidentally, [Ziya](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-v1) adopted the same idea during its SFT stage.
* The base model was changed from [chinese-llama](https://huggingface.co/ziqingyang/chinese-llama-lora-13b) to [chinese-llama-plus](https://huggingface.co/ziqingyang/chinese-llama-plus-lora-13b). However, as observed by [BiLLa](https://github.com/Neutralzz/BiLLa), continued training on a Chinese-only corpus significantly increases perplexity on English corpora, which in turn undermined abilities such as mathematical calculation in our preliminary experiments. Continued training of this kind remains under-studied; so far, using a bilingual corpus appears to be the better alternative.
* We switched to the Vicuna v1.1 conversation template and included more CoT training data.
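
Below is a minimal, purely illustrative sketch of what such a curriculum-style ordering of SFT data can look like. The stage names and file paths are hypothetical placeholders, not a description of our actual training mixture.

```
# Hypothetical illustration of curriculum-style SFT data ordering: feed easier
# conversational QA first, then progressively harder reasoning/coding/math data.
# The stage file names below are placeholders, not our actual datasets.
import json

STAGES = [
    "stage1_simple_qa.jsonl",      # simple question answering
    "stage2_general_chat.jsonl",   # open-ended instructions and chat
    "stage3_code_and_math.jsonl",  # reasoning-intensive coding & math problems
]

def iter_curriculum(stage_files=STAGES):
    # Yield training examples stage by stage instead of shuffling all stages together.
    for path in stage_files:
        with open(path, encoding="utf-8") as f:
            for line in f:
                yield json.loads(line)
```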

Again, this is for research purposes only. There is no guarantee of its performance. All credit goes to the original authors of LLaMA and Chinese-LLaMA.

Compared with the previous release, the new model improves on coding and reasoning problems. However, it still suffers from hallucinations and performs poorly on Chinese domain-specific problems, e.g. Chinese literature and idioms.

### Usage

We use exactly the Vicuna template for training and inference. Sample code is shown below.

```
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "keyfan/vicuna-chinese-replication-v1.1"

# Load the slow (SentencePiece) tokenizer and put the model on GPU
tokenizer = AutoTokenizer.from_pretrained(checkpoint, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(checkpoint).cuda()

# Vicuna v1.1 conversation template, used for both training and inference
template = ("A chat between a curious human and an artificial intelligence assistant. "
            "The assistant gives helpful, detailed, and polite answers to the human's questions. "
            "USER: {}\nASSISTANT:")
question = template.format("Who was the president of the United States in 1955?")
inputs = tokenizer.encode(question, return_tensors="pt").cuda()
outputs = model.generate(inputs, do_sample=True, temperature=0.2, max_new_tokens=512)
print(tokenizer.decode(outputs[0]))
```
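
The decoded string above includes the prompt. If you only want the assistant's reply, one common option (not specific to this model) is to decode just the newly generated tokens:

```
# Slice off the prompt tokens so only the newly generated reply is decoded.
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)
```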
### Evaluation

* Results on the [Chinese-LLaMA-Alpaca devset](https://github.com/ymcui/Chinese-LLaMA-Alpaca/tree/main/examples), compared with Alpaca-Plus-13B. For simplicity, we sampled only one answer per question without any cherry-picking, using the template provided in their repo. Note that GPT-4 has a strong bias toward more detailed answers, so the scores may not be consistent with human evaluation.

| Model | Macro-Average | QA | OQA | REASONING | LITERATURE | ENTERTAINMENT | GENERATION | TRANSLATION | CODE | ETHICS |
| - | - | - | - | - | - | - | - | - | - | - |
| Alpaca-Plus-13B | 77.3 | 70 | 74 | 70 | **80** | **77** | 82 | **89** | 64 | **90** |
| ours | **82.4** | **81** | **87** | **88** | 73 | **78** | **85** | 83 | **83** | 84 |

* Results on the newly released [C-Eval test set](https://cevalbenchmark.com/index.html#home) with 5-shot prompting. We slightly modified [MOSS's evaluator code](https://github.com/SJTU-LIT/ceval/blob/main/code/evaluator_series/evaluators/moss.py) from the C-Eval codebase by moving the '答案:' (Answer:) suffix from the end of the question to the beginning of the chatbot response; a sketch of this change follows the table.

| Average | Avg(Hard) | STEM | Social Science | Humanities | Others |
| - | - | - | - | - | - |
| 37.0 | 29.5 | 34.6 | 44.5 | 35.7 | 35.9 |
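
For clarity, here is a minimal sketch of the prompt change described above. It is illustrative only: the function names and exact string handling are ours, not the C-Eval/MOSS evaluator code.

```
# Illustrative sketch of the evaluation-prompt change: move the '答案:' (Answer:)
# cue from the end of the question into the start of the assistant's turn.
# Function names here are hypothetical, not taken from the C-Eval codebase.

TEMPLATE = ("A chat between a curious human and an artificial intelligence assistant. "
            "The assistant gives helpful, detailed, and polite answers to the human's questions. "
            "USER: {}\nASSISTANT:")

def prompt_answer_after_question(question: str) -> str:
    # Original style: the answer cue is appended to the question itself.
    return TEMPLATE.format(question + "\n答案:")

def prompt_answer_starts_response(question: str) -> str:
    # Our modification: the chatbot response begins with the answer cue,
    # so the model continues generation right after '答案:'.
    return TEMPLATE.format(question) + " 答案:"
```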