---
library_name: transformers
tags:
- llama
- trl
datasets:
- elyza/ELYZA-tasks-100
language:
- ja
base_model:
- llm-jp/llm-jp-3-13b
---

# Model Card for maktag/llm-jp-3-13b-finetune8




## Model Details

### Model Description

This model was fine-tuned to generate outputs for elyza-tasks-100-TV_0.jsonl, the final assignment of the 2024 LLM course run by the Matsuo Lab at the University of Tokyo.
The model is intended to be used with the provided OmniCampus environment and sample code.

- **Developed by:** maktag
- **Language(s) (NLP):** Japanese
- **Finetuned from model:** llm-jp/llm-jp-3-13b


## How to Get Started with the Model

```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Base model and fine-tuned LoRA adapter
base_model_id = "llm-jp/llm-jp-3-13b"
adapter_id = "maktag/llm-jp-3-13b-finetune8"

# QLoRA config (4-bit NF4 quantization)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load the base model with 4-bit quantization
# (HF_TOKEN is your Hugging Face access token)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
    token=HF_TOKEN,
)

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(base_model_id, trust_remote_code=True, token=HF_TOKEN)

# Attach the LoRA adapter to the base model
model = PeftModel.from_pretrained(model, adapter_id, token=HF_TOKEN)
```
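
Once the adapter is attached, the model can be used for inference as in the minimal sketch below. The `### 指示` / `### 回答` prompt template is an assumption based on common llm-jp fine-tuning recipes, not a confirmed detail of this model; adjust it to match the template actually used during training.

```
# Minimal inference sketch, reusing `model` and `tokenizer` loaded above.
prompt = "### 指示\n日本で一番高い山は何ですか？\n### 回答\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=False,
        repetition_penalty=1.2,
    )

# Print only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```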


## Training Details

- Fine-Tuning Framework: LoRA-based PEFT (Parameter-Efficient Fine-Tuning).
- Dataset: Proprietary Japanese instruction-following dataset.
- Sequence Length: 512 tokens.
- Hyperparameters (a configuration sketch follows this list):
  - Batch size: 32
  - Learning rate: 1e-5
  - Epochs: 3
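
The listed hyperparameters could be wired up as in the sketch below. The LoRA rank, alpha, dropout, target modules, per-device batch size, and output directory are illustrative assumptions; only the effective batch size (32), learning rate (1e-5), epoch count (3), and 512-token sequence length come from this card.

```
from peft import LoraConfig
from transformers import TrainingArguments

# Illustrative LoRA configuration (rank/alpha/dropout/target modules are assumptions)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Training arguments matching the hyperparameters above:
# effective batch size 32 (4 per device x 8 accumulation steps), lr 1e-5, 3 epochs
training_args = TrainingArguments(
    output_dir="./llm-jp-3-13b-finetune",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,
    learning_rate=1e-5,
    num_train_epochs=3,
    bf16=True,
    logging_steps=10,
    save_strategy="epoch",
)

# These configs would then be passed, together with the 4-bit quantized base model
# and a 512-token max sequence length, to a supervised fine-tuning trainer such as
# trl's SFTTrainer (exact trainer arguments vary across trl versions).
```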

### Training Data


- [elyza/ELYZA-tasks-100](https://huggingface.co/datasets/elyza/ELYZA-tasks-100)
- [Ichikara Instruction](https://liat-aip.sakura.ne.jp/wp/llmのための日本語インストラクションデータ作成/llmのための日本語インストラクションデータ-公開/)
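
For reference, the public ELYZA-tasks-100 dataset can be inspected with the `datasets` library; the split and field names below are assumptions and should be checked against the dataset card.

```
from datasets import load_dataset

# Split and field names ("test", "input") are assumptions;
# verify them on the dataset card if loading fails.
ds = load_dataset("elyza/ELYZA-tasks-100", split="test")
print(ds[0]["input"])
```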