Update README.md

README.md (CHANGED)
tags:
- unsloth
- llama
- trl
license: cc
language:
- en
datasets:
- kajuma/CC-news-2024-July-October-cleaned
- weblab-GENIAC/aya-ja-nemotron-dpo-masked
---

# Uploaded model

- **Developed by:** nnishi
- **License:** CC-BY-NC-SA
- **Finetuned from model:** llm-jp/llm-jp-3-13b

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

# How to run inference (for competition evaluators)

Run the following in a Google Colab L4 runtime.

```python
!pip install unsloth
!pip uninstall unsloth -y && pip install --upgrade --no-cache-dir --no-deps git+https://github.com/unslothai/unsloth.git

HF_TOKEN = ""  # WRITE YOUR HF_TOKEN here
ELYZA_TASKS_100_TV_JSONL_PATH = ""  # WRITE the path to the elyza-tasks-100-TV jsonl file
# The output for elyza-tasks-100-tv is saved as "output.jsonl".

from huggingface_hub import login

login(HF_TOKEN)

# Patch TRL's DPOTrainer (kept from the training setup; harmless at inference time).
from unsloth import PatchDPOTrainer

PatchDPOTrainer()

import torch
from unsloth import FastLanguageModel

max_seq_length = 2048
dtype = torch.bfloat16
load_in_4bit = True

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "nnishi/llm-jp-3-13b_kajumanews0.3_ichikara_ayadpo100rows_lora",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
    token = HF_TOKEN,
)

# Read the evaluation tasks. A record may be pretty-printed across several
# lines, so accumulate lines until the JSON object closes.
import json

datasets = []
with open(ELYZA_TASKS_100_TV_JSONL_PATH, "r") as f:
    item = ""
    for line in f:
        item += line.strip()
        if item.endswith("}"):
            datasets.append(json.loads(item))
            item = ""

from tqdm import tqdm

FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference mode

results = []
for dt in tqdm(datasets):
    task_input = dt["input"]
    prompt = f"### 指示\n{task_input}\n### 回答\n"

    inputs = tokenizer([prompt], return_tensors = "pt").to(model.device)
    outputs = model.generate(
        **inputs,
        max_new_tokens = 2048,
        use_cache = True,
        do_sample = False,
        repetition_penalty = 1.2,
    )
    # Keep only the text after the final "### 回答" marker.
    prediction = tokenizer.decode(outputs[0], skip_special_tokens=True).split("\n### 回答")[-1]

    results.append({"task_id": dt["task_id"], "input": task_input, "output": prediction})

with open("output.jsonl", "w") as f:
    for r in results:
        f.write(json.dumps(r, ensure_ascii=False) + "\n")
```
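
As a quick check, not part of the original notebook, you can verify that the generated `output.jsonl` parses and that every record carries the keys the script writes:

```python
import json

# Each line of output.jsonl should be one JSON object with these keys.
with open("output.jsonl") as f:
    rows = [json.loads(line) for line in f]

assert all({"task_id", "input", "output"} <= row.keys() for row in rows)
print(f"{len(rows)} predictions look well-formed")
```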

# Development steps

- quantize `llm-jp/llm-jp-3-13b`
- continued pre-training
  - 5760 randomly chosen records from `kajuma/CC-news-2024-July-October-cleaned`
- instruction tuning
  - all data in `ichikara-instruction`'s `ichikara-instruction-003-001-1.json`
- direct preference optimization (DPO; a rough sketch follows this list)
  - 100 randomly chosen records from `weblab-GENIAC/aya-ja-nemotron-dpo-masked`
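
The training code is not included in this repository. As a rough illustration of the last step above, the sketch below shows how a DPO pass with Unsloth and TRL could look. The LoRA settings, hyperparameters, seed, and column handling are assumptions for illustration, not the exact recipe used for this model, and DPOTrainer argument names vary across TRL versions.

```python
# Illustrative sketch only: LoRA settings and hyperparameters are assumptions.
from unsloth import FastLanguageModel, PatchDPOTrainer

PatchDPOTrainer()  # patch TRL's DPOTrainer to work with Unsloth models

from datasets import load_dataset
from trl import DPOConfig, DPOTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "llm-jp/llm-jp-3-13b",
    max_seq_length = 2048,
    load_in_4bit = True,
)
# Attach LoRA adapters (rank, alpha, and target modules are assumed values).
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    lora_alpha = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
)

# 100 randomly chosen preference records, as in the steps above (seed is illustrative).
dataset = load_dataset("weblab-GENIAC/aya-ja-nemotron-dpo-masked", split="train")
dataset = dataset.shuffle(seed=42).select(range(100))
# DPOTrainer expects "prompt", "chosen", and "rejected" columns; map the
# dataset to that schema first if its column names differ.

trainer = DPOTrainer(
    model = model,
    args = DPOConfig(output_dir = "dpo-out", per_device_train_batch_size = 1, beta = 0.1),
    train_dataset = dataset,
    processing_class = tokenizer,  # "tokenizer=" in older TRL versions
)
trainer.train()
```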

# Used datasets and their licenses

## kajuma/CC-news-2024-July-October-cleaned

- [creator](https://huggingface.co/kajuma)
- [repository](https://huggingface.co/datasets/kajuma/CC-news-2024-July-October-cleaned)

Creator: kajuma
Dataset: kajuma/CC-news-2024-July-October-cleaned
License: ODC-BY

## ichikara-instruction: LLMのための日本語インストラクションデータ (Japanese instruction data for LLMs)

- [homepage](https://liat-aip.sakura.ne.jp/wp/llm%E3%81%AE%E3%81%9F%E3%82%81%E3%81%AE%E6%97%A5%E6%9C%AC%E8%AA%9E%E3%82%A4%E3%83%B3%E3%82%B9%E3%83%88%E3%83%A9%E3%82%AF%E3%82%B7%E3%83%A7%E3%83%B3%E3%83%87%E3%83%BC%E3%82%BF%E4%BD%9C%E6%88%90/)

Creator: Language Information Access Technology Team, RIKEN Center for Advanced Intelligence Project (理化学研究所 革新知能統合研究センター 言語情報アクセス技術チーム)
Dataset: ichikara-instruction: LLMのための日本語インストラクションデータ
License: CC-BY-NC-SA

## weblab-GENIAC/aya-ja-nemotron-dpo-masked

- [creator](https://huggingface.co/weblab-GENIAC)
- [repository](https://huggingface.co/datasets/weblab-GENIAC/aya-ja-nemotron-dpo-masked)

Creator: weblab-GENIAC
Dataset: weblab-GENIAC/aya-ja-nemotron-dpo-masked
License: Apache License 2.0