# Uploaded model

- **Developed by:** sekigh
- **License:** apache-2.0
- **Finetuned from model:** llm-jp/llm-jp-3-13b

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
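The card does not include the training script itself. For orientation, here is a minimal sketch of a typical Unsloth + TRL LoRA run; every hyperparameter, the dataset file, and its text field are illustrative assumptions, not the settings actually used for this adapter.

```python
# Hypothetical sketch only: hyperparameters, dataset file, and field names are assumptions.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="llm-jp/llm-jp-3-13b",
    load_in_4bit=True,
)

# Wrap the base model with LoRA adapters (rank and target modules are assumed).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# "train.jsonl" and its "text" field are placeholders for the actual instruction data.
train_dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    dataset_text_field="text",
    max_seq_length=512,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        learning_rate=2e-4,
    ),
)
trainer.train()
```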

## Usage

Load the required libraries.

```python
from unsloth import FastLanguageModel
from peft import PeftModel
import torch
import json
from tqdm import tqdm
import re
```

Specify the base model and the trained LoRA adapter (both as Hugging Face repo IDs).

```python
model_id = "llm-jp/llm-jp-3-13b"
adapter_id = "sekigh/llm-jp-3-13b-it-H-E-6-local_lora"
```

Load the base model with Unsloth's FastLanguageModel.

```python
dtype = None          # None lets Unsloth pick the dtype automatically
load_in_4bit = True   # load in 4-bit because this is a 13B model

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name=model_id,
    dtype=dtype,
    load_in_4bit=load_in_4bit,
    trust_remote_code=True,
)
```

Attach the trained LoRA adapter to the base model.

```python
# HF_TOKEN must be a valid Hugging Face access token (see the note below).
model = PeftModel.from_pretrained(model, adapter_id, token=HF_TOKEN)
```
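Note that `HF_TOKEN` is not defined anywhere above. One way to supply it (an assumption; any method of providing a valid Hugging Face access token works) is through an environment variable:

```python
import os

# Assumes the token was exported beforehand, e.g. `export HF_TOKEN=hf_...`.
HF_TOKEN = os.environ["HF_TOKEN"]
```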

Load the task data (upload the data file in advance).

```python
datasets = []
with open("./elyza-tasks-100-TV_0.jsonl", "r") as f:
    item = ""
    for line in f:
        line = line.strip()
        item += line
        if item.endswith("}"):          # a complete JSON object has been read
            datasets.append(json.loads(item))
            item = ""
```

Run inference over the tasks with the model. First, switch the model to inference mode:

```python
FastLanguageModel.for_inference(model)
```

```python
results = []

for dt in tqdm(datasets):
    input = dt["input"]

    prompt = f"""### 指示\n{input}\n### 回答\n"""

    inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs,
        max_new_tokens=512,
        use_cache=True,
        do_sample=False,
        repetition_penalty=1.2,
    )

    # Keep only the text after the final "### 回答" marker, i.e. the model's answer.
    prediction = tokenizer.decode(outputs[0], skip_special_tokens=True).split("\n### 回答")[-1]

    results.append({"task_id": dt["task_id"], "input": input, "output": prediction})
```

Save the results as JSONL. Here the file name is derived from adapter_id, but any file name works.

```python
# Derive the file name from adapter_id by stripping the "owner/" prefix.
json_file_id = re.sub(r".*/", "", adapter_id)

with open(f"./{json_file_id}_output.jsonl", "w", encoding="utf-8") as f:
    for result in results:
        json.dump(result, f, ensure_ascii=False)
        f.write("\n")
```
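As an optional sanity check (not part of the original workflow), the file can be read back to confirm each line parses and the expected fields are present:

```python
# Optional: read the output back and inspect the first record.
with open(f"./{json_file_id}_output.jsonl", encoding="utf-8") as f:
    first = json.loads(f.readline())
print(first["task_id"], first["output"][:80])
```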