togepi55 committed
Commit 03db82b
Parent(s): f4f24e9

Upload README.md

Files changed (1):
  1. README.md +6 -56
README.md CHANGED
@@ -9,46 +9,22 @@ license: apache-2.0
  ---
 
  # Model Card for Model ID
-
- <!-- Provide a quick summary of what the model is/does. -->
-
-
- ## Model Details
-
- ### Model Description
-
- <!-- Provide a longer summary of what this model is. -->
-
-
  - **Developed by:** togepi55
  - **Finetuned from model:** llm-jp/llm-jp-3-13b
- - **Language(s) (NLP):** en, ja
+ - **Language(s) (NLP):** English, Japanese
  - **License:** apache-2.0
- - **Finetuned from model [optional]:** [More Information Needed]
-
- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
-
- ## Uses
-
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
 
  ### Note
  The model was trained only on prompts in the following format.
  ~~~
- "<s>以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい
+ """
+ <s>以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい
 
  ### 指示:
  {instruction}
 
- ### 応答:"
+ ### 応答:
+ """
  ~~~
 
  ### Sample Code
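As a sketch of how the prompt format above can be assembled before generation — the `build_prompt` helper is hypothetical and not part of the model card:

```python
# Hypothetical helper that fills in the prompt template shown above.
PROMPT_TEMPLATE = (
    "<s>以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい\n"
    "\n"
    "### 指示:\n"
    "{instruction}\n"
    "\n"
    "### 応答:\n"
)

def build_prompt(instruction: str) -> str:
    """Return the full prompt string for one instruction."""
    return PROMPT_TEMPLATE.format(instruction=instruction)

prompt = build_prompt("日本の首都はどこですか。")
print(prompt)
```

The resulting string is what would be passed to the tokenizer in the sample code below; the model expects exactly this layout, since it was trained only on this format.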
@@ -77,7 +53,6 @@ model = AutoModelForCausalLM.from_pretrained(
      BASE_MODEL,
      device_map="auto",
      quantization_config=bnb_config,
- #torch_dtype=torch.bfloat16,
      torch_dtype="auto",
      trust_remote_code=True,
  )
@@ -102,7 +77,6 @@ with torch.no_grad():
          pad_token_id=tokenizer.pad_token_id,
          eos_token_id=tokenizer.eos_token_id,
          do_sample=False,
- #num_return_sequences=3,
          streamer=streamer,
          repetition_penalty=1.02,
      )
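The `repetition_penalty=1.02` argument above mildly discourages the model from repeating tokens it has already generated. A minimal pure-Python sketch of the penalty rule (positive logits of previously seen tokens are divided by the penalty, negative ones multiplied, so penalized tokens always become less likely) — the toy numbers are illustrative only:

```python
def apply_repetition_penalty(logits, generated_ids, penalty=1.02):
    """Penalize tokens that already appear in the generated sequence.

    Positive logits are divided by the penalty and negative logits are
    multiplied by it, so a penalized token's score always decreases.
    """
    out = list(logits)
    for tok in set(generated_ids):
        if out[tok] > 0:
            out[tok] /= penalty
        else:
            out[tok] *= penalty
    return out

# Exaggerated penalty of 2.0 to make the effect visible.
print(apply_repetition_penalty([2.0, -1.0, 0.5], [0, 1], penalty=2.0))
# → [1.0, -2.0, 0.5]
```

With the card's value of 1.02 the adjustment is very small, nudging greedy decoding (`do_sample=False`) away from loops without noticeably changing fluent output.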
@@ -114,34 +88,10 @@ with torch.no_grad():
 
 
- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
  ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
  Because RLHF and DPO have not been applied, the model may produce inappropriate output.
 
- ## Training Details
-
- ### Training Data
-
+ ### Training Details
  The following data sets were used for instruction tuning.
  * ichikara-instruction-003-001-1.json
  * ichikara-instruction-003-002-1.json

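The instruction-tuning files listed above are JSON. A hedged sketch of turning such records into training examples in the prompt format described earlier — the card does not state the file schema, so the `text`/`output` field names and the sample record here are assumptions:

```python
import json

# Stand-in for one file's contents; the real ichikara-instruction
# schema and field names may differ.
raw = '[{"text": "日本の首都は?", "output": "東京です。"}]'
records = json.loads(raw)

def to_training_example(rec):
    """Render one record into the card's prompt format (assumed schema)."""
    return (
        "<s>以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい\n\n"
        "### 指示:\n" + rec["text"] + "\n\n"
        "### 応答:\n" + rec["output"]
    )

examples = [to_training_example(r) for r in records]
print(examples[0])
```

Each rendered string pairs an instruction with its reference answer, matching the format the model is documented to expect at inference time.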