zhaozitian committed
Commit fe5b12d
1 Parent(s): 61d167d
Update README.md
README.md CHANGED
@@ -10,10 +10,15 @@ language:
 ## This model is a fine-tuned Llama2-13b-chat-hf model, trained on a Japanese dataset with LoRA.
 # This model was fine-tuned through the joint efforts of Sparticle Inc. and A. I. Hakusan Inc.
 
-The training set of this model
+The training set of this model contains:
+
+5% of randomly chosen data from the llm-japanese-dataset by izumi-lab.
+
+The Japanese-alpaca-lora dataset, retrieved from https://github.com/masa3141/japanese-alpaca-lora/tree/main .
+
 For inference, please follow the instructions in https://github.com/tloen/alpaca-lora/ .
-## Training procedure
 
+## Training procedure
 
 The following `bitsandbytes` quantization config was used during training:
 - load_in_8bit: True
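
The added lines above describe the training data mixture. As a rough sketch of how that mixture could be assembled (this is not the authors' preprocessing code; the dataset config, column names, file path, and seeds are assumptions), the 5% random sample and the concatenation might look like:

```python
# Hypothetical sketch of the data mixture described in the diff: a 5% random
# sample of izumi-lab/llm-japanese-dataset combined with japanese-alpaca-lora data.
# Column names, the JSON path, and the seeds are assumptions, not from the commit.
from datasets import load_dataset, concatenate_datasets

# Take a 5% random sample of llm-japanese-dataset (seed chosen arbitrarily here).
full = load_dataset("izumi-lab/llm-japanese-dataset", split="train")
sample = full.shuffle(seed=42).select(range(int(0.05 * len(full))))

# japanese-alpaca-lora ships its data as JSON in the GitHub repo linked above;
# the local path below assumes the repo has been cloned.
alpaca_ja = load_dataset(
    "json",
    data_files="japanese_alpaca_data.json",  # assumed file name
    split="train",
)

# Both datasets are assumed to use alpaca-style instruction/input/output records.
columns = ["instruction", "input", "output"]
mixed = concatenate_datasets(
    [sample.select_columns(columns), alpaca_ja.select_columns(columns)]
).shuffle(seed=42)
print(mixed)
```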
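
For inference, the README defers to the alpaca-lora instructions and reports `load_in_8bit: True` during training. A minimal sketch along those lines, assuming `transformers`, `peft`, and `bitsandbytes` are installed with GPU support, and using a hypothetical adapter repo id (the commit does not name one):

```python
# Minimal inference sketch: 8-bit base model + LoRA adapter, alpaca-lora style.
# The adapter repo id below is hypothetical; substitute the actual adapter weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-13b-chat-hf"             # base model named in the README
adapter_id = "your-org/llama2-13b-chat-japanese-lora"  # hypothetical LoRA adapter id

tokenizer = AutoTokenizer.from_pretrained(base_id)

# load_in_8bit=True mirrors the bitsandbytes setting listed above.
model = AutoModelForCausalLM.from_pretrained(base_id, load_in_8bit=True, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA weights
model.eval()

# alpaca-lora prompt template (no-input variant) from https://github.com/tloen/alpaca-lora/
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n日本の首都はどこですか？\n\n"  # "What is the capital of Japan?"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Newer `transformers` releases express the same flag through `BitsAndBytesConfig(load_in_8bit=True)` passed as `quantization_config`; the behavior is equivalent.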