# This model was fine-tuned through the joint efforts of Sparticle Inc. and A. I. Hakusan Inc.

## This is a Llama2-7b model fine-tuned with LoRA on a Japanese dataset.

The training set is about 5% of the llm-japanese-dataset by izumi-lab, chosen at random.
For inference, please follow the instructions at https://github.com/tloen/alpaca-lora/.
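
Concretely, a minimal inference sketch with `transformers` and `peft` might look like the following; the base checkpoint name, the adapter id, and the bare prompt are assumptions (alpaca-lora expects its own instruction template), so treat the instructions linked above as authoritative.

```python
# Minimal sketch, assuming the usual transformers + peft inference flow.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "meta-llama/Llama-2-7b-hf"   # assumed base checkpoint
ADAPTER_ID = "path/to/this-lora-adapter"  # hypothetical: substitute this repo's id

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    load_in_8bit=True,   # mirrors the training-time quantization listed below
    device_map="auto",
)
model = PeftModel.from_pretrained(model, ADAPTER_ID)  # attach the LoRA weights
model.eval()

prompt = "日本の首都はどこですか？"  # "What is the capital of Japan?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```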
## Training procedure

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
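
For reference, the same settings can be expressed as a `transformers` `BitsAndBytesConfig` when reloading the base model in 8-bit; this is only a sketch, and the checkpoint name is an assumption.

```python
# Sketch: the training-time bitsandbytes settings as a BitsAndBytesConfig.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # assumed base checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)
```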
### Framework versions

- PEFT 0.5.0.dev0