fujiki committed
Commit f9dd644
1 Parent(s): 1393148

Update README.md

Files changed (1)
  1. README.md +5 -5
README.md CHANGED
@@ -8,11 +8,11 @@ pipeline_tag: text-generation
 license: apache-2.0
 ---
 
-# Japanese StableLM Instruct Gamma
+# Japanese Stable LM Instruct Gamma 7B
 
 ## Model Description
 
-This is a 7B-parameter decoder-only Japanese language model fine-tuned on instruction-following datasets, built on top of the base model [Japanese StableLM Base Gamma](https://huggingface.co/stabilityai/japanese-stablelm-base-gamma).
+This is a 7B-parameter decoder-only Japanese language model fine-tuned on instruction-following datasets, built on top of the base model [Japanese Stable LM Base Gamma 7B](https://huggingface.co/stabilityai/japanese-stablelm-base-gamma-7b).
 
 ## Usage
 
@@ -22,9 +22,9 @@ This is a 7B-parameter decoder-only Japanese language model fine-tuned on instru
 import torch
 from transformers import LlamaTokenizer, AutoModelForCausalLM
 
-tokenizer = AutoTokenizer.from_pretrained("stabilityai/japanese-stablelm-instruct-gamma")
+tokenizer = AutoTokenizer.from_pretrained("stabilityai/japanese-stablelm-instruct-gamma-7b")
 model = AutoModelForCausalLM.from_pretrained(
-  "stabilityai/japanese-stablelm-instruct-gamma",
+  "stabilityai/japanese-stablelm-instruct-gamma-7b",
   trust_remote_code=True,
   torch_dtype="auto",
 )
@@ -73,7 +73,7 @@ print(out)
 ## Model Details
 
 * **Developed by**: [Stability AI](https://stability.ai/)
-* **Model type**: `Japanese StableLM Instruct Gamma` model is an auto-regressive language model based on the transformer decoder architecture.
+* **Model type**: `Japanese Stable LM Instruct Gamma 7B` model is an auto-regressive language model based on the transformer decoder architecture.
 * **Language(s)**: Japanese
 * **License**: This model is licensed under [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
 * **Contact**: For questions and comments about the model, please email `lm@stability.ai`
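The hunks above show only the changed portion of the Usage snippet, and the unchanged context imports `LlamaTokenizer` while the call site uses `AutoTokenizer`. For reference, here is a minimal, self-contained sketch of loading the renamed repository id and generating text; it imports `AutoTokenizer` directly, and the prompt and generation parameters are illustrative assumptions rather than the prompt template defined in the full README.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id introduced by this commit (renamed from
# "stabilityai/japanese-stablelm-instruct-gamma").
model_id = "stabilityai/japanese-stablelm-instruct-gamma-7b"

# Import AutoTokenizer directly so the import matches the call site.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
)
model.eval()
if torch.cuda.is_available():
    model = model.to("cuda")

# Hypothetical prompt; the full README defines its own Japanese
# instruction template, which is not reproduced here.
prompt = "日本の首都はどこですか?"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    tokens = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.8,
    )

# Decode only the newly generated tokens, skipping the prompt.
out = tokenizer.decode(
    tokens[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(out)
```

The `-7b` suffix used here matches the base-model link updated in the same commit, so both repository references resolve consistently.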