---
license: cc-by-sa-4.0
datasets:
- izumi-lab/llm-japanese-dataset
language:
- ja
- en
---
# This model is a Llama-2-13b-chat-hf model fine-tuned on a Japanese dataset with LoRA.
## This model was fine-tuned through the joint efforts of Sparticle Inc. and A. I. Hakusan Inc.
The training set of this model contains:
- 5% of randomly chosen data from llm-japanese-dataset by izumi-lab.
- The japanese-alpaca-lora dataset, retrieved from https://github.com/masa3141/japanese-alpaca-lora/tree/main.
For inference, please follow the instructions at https://github.com/tloen/alpaca-lora/.
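As a minimal sketch of what alpaca-lora-style inference looks like, the snippet below loads the Llama-2 base model in 8-bit and applies this LoRA adapter with PEFT. The adapter path and the prompt template are illustrative assumptions; follow the alpaca-lora instructions linked above for the exact setup.

```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

BASE_MODEL = "meta-llama/Llama-2-13b-chat-hf"
LORA_ADAPTER = "path/to/this-lora-adapter"  # assumption: replace with this repo's id or a local path

tokenizer = LlamaTokenizer.from_pretrained(BASE_MODEL)
model = LlamaForCausalLM.from_pretrained(
    BASE_MODEL,
    load_in_8bit=True,          # matches the 8-bit setting used during training
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, LORA_ADAPTER)
model.eval()

# Alpaca-style prompt template, as used by alpaca-lora / japanese-alpaca-lora
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n日本の首都はどこですか？\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(
        **inputs, max_new_tokens=128, do_sample=True, temperature=0.7
    )
print(tokenizer.decode(output[0], skip_special_tokens=True))
```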
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
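For reference, here is a hedged sketch of how the settings above translate into a `transformers` `BitsAndBytesConfig` for 8-bit PEFT training. The LoRA hyperparameters shown are illustrative placeholders, not the values used for this adapter.

```python
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import BitsAndBytesConfig, LlamaForCausalLM

# Mirrors the bitsandbytes settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
)

model = LlamaForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-chat-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)  # cast norms/head for stable 8-bit training

# Illustrative LoRA settings (assumptions; the card does not list the actual hyperparameters)
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
```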
### Framework versions
- PEFT 0.5.0.dev0
You must agree to Meta's license agreement when using this LoRA adapter with Llama-2.