Commit d5ea9cb by KKFurudate
Parent(s): d1111ad
Update README.md

README.md CHANGED
@@ -19,8 +19,8 @@ datasets:
 - **License:** cc-by-nc-sa-4.0
 - **Finetuned from model :** llm-jp/llm-jp-3-13b
 
-
-Using `llm-jp/llm-jp-3-13b` as the base model, the ichikara-instruction data and ELYZA-tasks-100
+This model is a LoRA adapter for an LLM, fine-tuned for Japanese tasks to produce instruction-following responses.
+It was fine-tuned from the base model `llm-jp/llm-jp-3-13b` using the ichikara-instruction data and ELYZA-tasks-100.
 
 This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
 [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
@@ -59,7 +59,7 @@ HF_TOKEN = "YOUR-HF-TOKEN"
 ```
 
 ```python
-#
+# Base model and trained LoRA adapter (specified by Hugging Face ID).
 model_id = "llm-jp/llm-jp-3-13b"
 adapter_id = "KKFurudate/llm-jp-3-13b-v6_lora"
 ```
@@ -79,7 +79,7 @@ model, tokenizer = FastLanguageModel.from_pretrained(
 
 
 ```python
-#
+# Merge the trained LoRA adapter into the base model.
 model = PeftModel.from_pretrained(model, adapter_id, token = HF_TOKEN)
 ```
 
@@ -97,7 +97,7 @@ with open("./YOUR-DATA.jsonl", "r") as f:
 
 
 ```python
-#
+# Run task inference with the merged model.
 FastLanguageModel.for_inference(model)
 
 results = []
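Pieced together from the fragments shown in the diff, a minimal end-to-end sketch might look as follows. The prompt template, generation settings, and output path are illustrative assumptions (the diff only shows fragments); `run_inference()` needs a CUDA GPU with `unsloth` and `peft` installed, so it is defined here but deliberately not called.

```python
import json


def to_jsonl(records):
    """Serialize result dicts to JSONL, keeping Japanese text readable."""
    return "\n".join(json.dumps(r, ensure_ascii=False) for r in records)


def run_inference():
    # Heavy imports stay inside the function; calling it requires a CUDA
    # GPU with unsloth and peft installed.
    from unsloth import FastLanguageModel
    from peft import PeftModel

    HF_TOKEN = "YOUR-HF-TOKEN"
    model_id = "llm-jp/llm-jp-3-13b"
    adapter_id = "KKFurudate/llm-jp-3-13b-v6_lora"

    # Load the base model, then attach the trained LoRA adapter.
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name=model_id, load_in_4bit=True, token=HF_TOKEN
    )
    model = PeftModel.from_pretrained(model, adapter_id, token=HF_TOKEN)
    FastLanguageModel.for_inference(model)

    with open("./YOUR-DATA.jsonl", "r") as f:
        tasks = [json.loads(line) for line in f if line.strip()]

    results = []
    for task in tasks:
        # Hypothetical instruction/response template; adjust to whatever
        # format the adapter was actually trained on.
        prompt = f"### 指示\n{task['input']}\n### 回答\n"
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
        answer = tokenizer.decode(
            outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
        )
        results.append({"task_id": task.get("task_id"), "output": answer})

    with open("results.jsonl", "w") as f:
        f.write(to_jsonl(results))

# run_inference() is not invoked here; run it on a suitable GPU machine.
```

The `to_jsonl` helper uses `ensure_ascii=False` so Japanese outputs are written as readable UTF-8 rather than `\uXXXX` escapes.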