KBlueLeaf committed
Commit d65dc36
1 Parent(s): 70e3e1f

Update README.md

Files changed (1)
  1. README.md +88 -0
README.md CHANGED
---
license: gpl-3.0
datasets:
- JosephusCheung/GuanacoDataset
- yahma/alpaca-cleaned
language:
- en
- zh
- ja
tags:
- llama
- guanaco
- alpaca
- lora
- finetune
---

# Guanaco-leh-V2: A Multilingual Instruction-Following Language Model Based on LLaMA 7B
This model is trained with [guanaco-lora](https://github.com/KohakuBlueleaf/guanaco-lora), with LoRA plus the embed_tokens and lm_head layers trained.

The dataset comes from [alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned) and [guanaco](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset).
With the trained embedding and head, the model performs better at Chinese and Japanese than the original LLaMA, and with instruction-based prompts you can use this model more easily.

Since this model is trained on the Guanaco dataset, you can also use it as a chatbot. Just use this format:
```
### Instruction:
User: <Message history>
Assistant: <Message history>

### Input:
System: <System response for next message, optional>
User: <Next message>

### Response:
```

**Tip: I removed the first line of the original prompt to reduce token consumption, so please consider removing it when you use this model. A minimal prompt-building sketch is shown below.**

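For illustration, here is a minimal sketch of how the prompt format above could be assembled and sent to the model with the standard `transformers` API. The repo id and the `build_prompt` helper are assumptions for illustration, not part of the original instructions; swap in whatever checkpoint you actually use.

```python
# Minimal sketch (not the official inference script): build the chat-style prompt
# described above and generate a reply with Hugging Face transformers.
# The repo id below is an assumed/hypothetical placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "KBlueLeaf/guanaco-7b-leh-v2"  # assumed repo id, replace with your own path

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto", device_map="auto")


def build_prompt(history, user_message, system=""):
    """Assemble the Guanaco-style prompt (with the original first line already removed)."""
    history_text = "\n".join(f"{role}: {text}" for role, text in history)
    input_block = (f"System: {system}\n" if system else "") + f"User: {user_message}"
    return (
        "### Instruction:\n"
        f"{history_text}\n"
        "\n"
        "### Input:\n"
        f"{input_block}\n"
        "\n"
        "### Response:\n"
    )


prompt = build_prompt(
    history=[("User", "Hello!"), ("Assistant", "Hi! How can I help you today?")],
    user_message="Please introduce yourself in Japanese.",
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs, max_new_tokens=256, do_sample=True,
    temperature=0.7, top_p=0.9, top_k=40, repetition_penalty=1.1,
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```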
## Differences from the previous model
The main differences are:
* the model is trained in bf16, not 8-bit
* context cutoff length increased to 1024
* larger dataset (latest Guanaco + alpaca-cleaned = 540k entries)
* larger batch size (64 -> 128)

And since the training data contains more chat-based data, this model is better suited for chatbot usage.

## Try this model:
You can try this model in this [colab](https://colab.research.google.com/drive/1nn6TCAKyFrgDEgA6X3o3YbxfbMm8Skp4).
Or use generate.py in [guanaco-lora](https://github.com/KohakuBlueleaf/guanaco-lora); all the examples below were generated with guanaco-lora.

If you want to use the LoRA model from guanaco-7b-leh-v2-adapter/, remember to turn off load_in_8bit, or manually merge it into the 7B model!

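A rough sketch of both options with `peft` is shown below; the base-model path and output directory are placeholders, and this assumes the adapter folder from this repo is available locally.

```python
# Sketch: load the LoRA adapter on top of LLaMA 7B with peft, keeping load_in_8bit off
# as noted above, then optionally merge it into a standalone 7B checkpoint.
# Paths are placeholders, not part of this repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "path/to/llama-7b"            # placeholder: local LLaMA 7B weights
ADAPTER_DIR = "guanaco-7b-leh-v2-adapter"  # adapter folder from this repo

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    torch_dtype=torch.bfloat16,  # keep load_in_8bit turned off
    device_map="auto",
)
model = PeftModel.from_pretrained(base, ADAPTER_DIR)

# Optional: merge the LoRA weights into the base model and save a standalone checkpoint.
merged = model.merge_and_unload()
merged.save_pretrained("guanaco-7b-leh-v2-merged")
tokenizer.save_pretrained("guanaco-7b-leh-v2-merged")
```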
### Recommended generation parameters:
* temperature: 0.5~0.7
* top p: 0.65~1.0
* top k: 30~50
* repeat penalty: 1.03~1.17

A minimal sampling setup using these ranges is sketched below.

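The values below are just one arbitrary pick from the recommended ranges, passed through a standard `transformers.GenerationConfig`; tune them within the ranges for your use case.

```python
# Sketch: one pick from the recommended sampling ranges above, expressed as a
# transformers GenerationConfig. Adjust within the ranges as needed.
from transformers import GenerationConfig

generation_config = GenerationConfig(
    do_sample=True,
    temperature=0.6,         # recommended 0.5~0.7
    top_p=0.9,               # recommended 0.65~1.0
    top_k=40,                # recommended 30~50
    repetition_penalty=1.1,  # recommended 1.03~1.17
    max_new_tokens=512,
)

# output = model.generate(**inputs, generation_config=generation_config)
```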
## Training Setup
* 2x RTX 3090 with model parallelism
* batch size = bsz 2 * grad acc 64 = 128
* context cutoff length = 1024
* only trained on outputs (with a loss mask; see the sketch below)
* group-by-length enabled
* 538k entries, 2 epochs (about 8400 steps)
* lr 2e-4

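"Only trained on outputs" means the loss is computed only on the response tokens, with the prompt tokens masked out of the labels. The following is a generic sketch of that masking, not the exact guanaco-lora implementation.

```python
# Sketch of output-only loss masking: prompt tokens are set to -100 in the labels so
# cross-entropy ignores them and only the response contributes to the loss.
# Generic illustration, not the exact guanaco-lora code.
IGNORE_INDEX = -100  # ignored by torch.nn.CrossEntropyLoss


def build_labels(tokenizer, prompt: str, response: str, cutoff_len: int = 1024):
    prompt_ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
    response_ids = tokenizer(response, add_special_tokens=False)["input_ids"]

    input_ids = (prompt_ids + response_ids + [tokenizer.eos_token_id])[:cutoff_len]
    labels = ([IGNORE_INDEX] * len(prompt_ids) + response_ids + [tokenizer.eos_token_id])[:cutoff_len]
    return {"input_ids": input_ids, "labels": labels}
```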
## Some Examples
(As you can see, although Guanaco can reply fluently, the content can be quite confusing, so you may want to add something in the System part.)
![](https://i.imgur.com/Hxyf3tR.png)
![](https://i.imgur.com/Mu06jxn.png)

I used Guanaco with an instruction to translate a Chinese article into Japanese/German/English, and used GPT-4 to score the results:
![](https://i.imgur.com/NfFQbZ2.png)

## Some more information

### Why use lora+embed+head
First, I think it is obvious that when an LLM isn't good at some language and you want to finetune it for that language, you should train the embedding and head parts.<br>
But the question is: "Why not just a native finetune?"<br>
If you have looked at some Alpaca models or their training setups, you may have noticed that a lot of them share one problem: "memorization".<br>
The loss drops at the beginning of every epoch, which looks like a kind of overfitting.<br>
In my opinion, this is because the number of parameters in LLaMA is so large that it simply memorizes all the training data.

But if I use LoRA only for the attention part (ignoring the MLP part), the parameter count is not large enough to memorize the training data, so the model is much less likely to memorize everything.
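In code, that combination (low-rank adapters on the attention projections, with the embedding and head trained in full) roughly corresponds to a `peft` configuration like the sketch below. The hyperparameters and module names are illustrative assumptions based on standard LLaMA naming, not the exact guanaco-lora settings.

```python
# Sketch: LoRA on the attention projections only (MLP untouched), while embed_tokens
# and lm_head are trained in full via modules_to_save.
# Hyperparameters and module names are illustrative assumptions.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("path/to/llama-7b")  # placeholder path

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention only, no MLP
    modules_to_save=["embed_tokens", "lm_head"],               # fully trained, not low-rank
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```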