---
library_name: transformers
datasets:
- kinokokoro/ichikara-instruction-003
base_model:
- llm-jp/llm-jp-3-13b
---

# Model Card for llm-jp-3-13b-finetune-v0

This model is [llm-jp/llm-jp-3-13b](https://huggingface.co/llm-jp/llm-jp-3-13b) fine-tuned with a LoRA adapter on the ichikara-instruction dataset for Japanese instruction following.

## Model Details

### Model Description

This model was fine-tuned on the following dataset.

| Language | Dataset | Description |
|:---|:---|:---|
| Japanese | [ichikara-instruction-003-001-1.json](https://liat-aip.sakura.ne.jp/wp/llm%E3%81%AE%E3%81%9F%E3%82%81%E3%81%AE%E6%97%A5%E6%9C%AC%E8%AA%9E%E3%82%A4%E3%83%B3%E3%82%B9%E3%83%88%E3%83%A9%E3%82%AF%E3%82%B7%E3%83%A7%E3%83%B3%E3%83%87%E3%83%BC%E3%82%BF%E4%BD%9C%E6%88%90/llm%E3%81%AE%E3%81%9F%E3%82%81%E3%81%AE%E6%97%A5%E6%9C%AC%E8%AA%9E%E3%82%A4%E3%83%B3%E3%82%B9%E3%83%88%E3%83%A9%E3%82%AF%E3%82%B7%E3%83%A7%E3%83%B3%E3%83%87%E3%83%BC%E3%82%BF-%E5%85%AC%E9%96%8B/) | A manually constructed instruction dataset |

Dataset creation team:

関根聡, 安藤まや, 後藤美知子, 鈴木久美, 河原大輔, 井之上直也, 乾健太郎. ichikara-instruction: Construction of a Japanese Instruction Dataset for LLMs. The 30th Annual Meeting of the Association for Natural Language Processing (2024).

### Usage

```bash
pip install -U bitsandbytes
pip install -U transformers
pip install -U accelerate
pip install -U datasets
pip install -U peft
```

```python
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
)
from peft import PeftModel
import torch
from tqdm import tqdm
import json

HF_TOKEN = "your_hf_token"

# Base model and fine-tuned LoRA adapter
model_id = "llm-jp/llm-jp-3-13b"
adapter_id = "hachimada/llm-jp-3-13b-finetune-v0"
```

```python
# QLoRA config: load the base model in 4-bit NF4 with bfloat16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load base model
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
    token=HF_TOKEN,
)

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True, token=HF_TOKEN)

# Apply the LoRA adapter
model = PeftModel.from_pretrained(model, adapter_id, token=HF_TOKEN)
```
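
Once the adapter is applied, a minimal single-prompt check can confirm that the model loads and responds. This is only a sketch: the question is purely illustrative, and the generation settings mirror the batch inference further down.

```python
# Minimal sanity check (illustrative prompt; same format as the batch inference below)
prompt = """### 指示
日本の首都はどこですか?
### 回答
"""

input_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        attention_mask=torch.ones_like(input_ids),
        max_new_tokens=128,
        do_sample=False,
        repetition_penalty=1.2,
        pad_token_id=tokenizer.eos_token_id,
    )[0]
print(tokenizer.decode(output_ids[input_ids.size(1):], skip_special_tokens=True))
```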

```python
# Load your dataset (JSONL; a record may be pretty-printed across multiple lines,
# so lines are accumulated until a full JSON object is closed)
datasets = []
with open("./your_dataset.jsonl", "r") as f:
    item = ""
    for line in f:
        line = line.strip()
        item += line
        if item.endswith("}"):
            datasets.append(json.loads(item))
            item = ""
```
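
The loader above only assumes that each record in `./your_dataset.jsonl` is a JSON object containing at least `task_id` and `input` fields, since those are the keys read by the inference loop below. The card does not specify any further schema, so the values in this sketch are purely illustrative.

```python
# Purely illustrative: create a tiny ./your_dataset.jsonl with the fields
# ("task_id" and "input") that the inference loop below expects.
import json

samples = [
    {"task_id": 0, "input": "日本の首都はどこですか?"},
    {"task_id": 1, "input": "次の文を英語に翻訳してください。「今日は良い天気です。」"},
]
with open("./your_dataset.jsonl", "w", encoding="utf-8") as f:
    for sample in samples:
        json.dump(sample, f, ensure_ascii=False)
        f.write("\n")
```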

```python
# Inference
results = []
for data in tqdm(datasets):
    input = data["input"]

    prompt = f"""### 指示
{input}
### 回答
"""

    tokenized_input = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt").to(model.device)
    attention_mask = torch.ones_like(tokenized_input)
    with torch.no_grad():
        outputs = model.generate(
            tokenized_input,
            attention_mask=attention_mask,
            max_new_tokens=1024,
            do_sample=False,  # greedy decoding
            repetition_penalty=1.2,
            pad_token_id=tokenizer.eos_token_id,
        )[0]
    # Decode only the newly generated tokens, not the prompt
    output = tokenizer.decode(outputs[tokenized_input.size(1):], skip_special_tokens=True)

    results.append({"task_id": data["task_id"], "input": input, "output": output})

# Dump results to JSONL
import re

jsonl_id = re.sub(".*/", "", adapter_id)
with open(f"./{jsonl_id}-outputs.jsonl", 'w', encoding='utf-8') as f:
    for result in results:
        json.dump(result, f, ensure_ascii=False)  # ensure_ascii=False keeps non-ASCII characters readable
        f.write('\n')
```
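
Each line of the resulting `llm-jp-3-13b-finetune-v0-outputs.jsonl` file is a JSON object with `task_id`, `input`, and `output` fields, one record per input example.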