Update the model card
README.md
CHANGED
@@ -7,14 +7,28 @@ tags:
- traditional_chinese
- alpaca
---

# Traditional Chinese Llama2
- GitHub repo: https://github.com/MIBlue119/traditional_chinese_llama2/
- A practice project: finetuning Llama2 on a traditional Chinese instruction dataset, starting from the Llama2 chat model.

I used QLoRA and the translated Alpaca dataset to finetune the llama2-7b model on an RTX 3090 (24 GB VRAM) in about 9 hours; a minimal sketch of the setup follows the reference list below.

Thanks to these references:
- NTU NLP Lab's Alpaca dataset: [alpaca-tw_en-align.json](./alpaca-tw-en-align.json): [ntunlplab](https://github.com/ntunlplab/traditional-chinese-alpaca) translated the Stanford Alpaca 52k dataset
- [Chinese Llama 2 7B train.py](https://github.com/LinkSoul-AI/Chinese-Llama-2-7b/blob/main/train.py)
- [Load the pretrained model in 4-bit precision and set up LoRA training according to HF's trl lib](https://github.com/lvwerra/trl/blob/main/examples/scripts/sft_trainer.py): QLoRA finetuning
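A minimal QLoRA sketch of that setup, not the exact training script: it assumes the 2023-era `SFTTrainer` signature from the trl example linked above, and the LoRA hyperparameters, prompt template, sequence length, and `text` field name are illustrative guesses.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import SFTTrainer

base_model = "NousResearch/Llama-2-7b-chat-hf"

# 4-bit NF4 quantization of the frozen base weights (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Small trainable low-rank adapters on top of the frozen 4-bit model
# (r/alpha/dropout are illustrative, not the values used for this card)
peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

# Flatten Alpaca-style records (instruction/input/output) into one text field;
# this prompt template is an assumption
def to_text(example):
    prompt = f"{example['instruction']}\n{example.get('input') or ''}".strip()
    return {"text": f"### Instruction:\n{prompt}\n\n### Response:\n{example['output']}"}

dataset = load_dataset(
    "json", data_files="alpaca-tw-en-align.json", split="train"
).map(to_text)

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=512,
    tokenizer=tokenizer,
)
trainer.train()
trainer.model.save_pretrained("traditional_chinese_qlora_llama2")  # saves only the adapter
```
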
## Resources
- Traditional Chinese QLoRA-finetuned Llama2 merged model: [weiren119/traditional_chinese_qlora_llama2_merged](https://huggingface.co/weiren119/traditional_chinese_qlora_llama2_merged)
- Traditional Chinese QLoRA adapter model: [weiren119/traditional_chinese_qlora_llama2](https://huggingface.co/weiren119/traditional_chinese_qlora_llama2)
## Online Demo
- [Run the QLoRA finetuned model on Colab](https://colab.research.google.com/drive/1OYXvhY-8KjEDaGhOLrJe4omjtFgOWjy1?usp=sharing): may need Colab Pro or Colab Pro+
## Notice
This repo is the model adapter.
If you want to use the merged checkpoint (adapter + original model), use this repo instead: https://huggingface.co/weiren119/traditional_chinese_qlora_llama2_merged
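
Since this repo holds only the adapter, here is a minimal loading sketch, assuming the standard `peft` API and a 4-bit base load to mirror training (repo ids are from this card):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_model = "NousResearch/Llama-2-7b-chat-hf"
adapter_repo = "weiren119/traditional_chinese_qlora_llama2"

# Load the frozen base chat model in 4-bit, matching the QLoRA training setup
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)
# Attach the LoRA adapter weights from this repo on top of the base model
model = PeftModel.from_pretrained(model, adapter_repo)
tokenizer = AutoTokenizer.from_pretrained(base_model)
```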
## Which pretrained model was used
- NousResearch: https://huggingface.co/NousResearch/Llama-2-7b-chat-hf
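
Because the base model is the Llama-2 chat variant, inference presumably uses the Llama-2 chat prompt format (`[INST] ... [/INST]`); a hedged sketch with the merged checkpoint, the prompt itself being just an example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "weiren119/traditional_chinese_qlora_llama2_merged"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# Llama-2 chat style prompt; asks for an introduction to Taipei 101 in traditional Chinese
prompt = "[INST] 請用繁體中文介紹台北101。 [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```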