---
license: mit
language:
  - ja
---

# Sarashina1-13B

This repository provides Japanese language models trained by SB Intuitions.

## How to use

Please set `use_fast=False` to use our tokenizer properly.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, set_seed

# Load the model in half precision and place it automatically on the available devices.
model = AutoModelForCausalLM.from_pretrained("sbintuitions/sarashina1-13b", torch_dtype=torch.float16, device_map="auto")
# The slow (SentencePiece-backed) tokenizer is required, hence use_fast=False.
tokenizer = AutoTokenizer.from_pretrained("sbintuitions/sarashina1-13b", use_fast=False)
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
set_seed(123)

text = generator(
    "おはようございます、今日の天気は",  # "Good morning, today's weather is"
    max_length=30,
    do_sample=True,
    pad_token_id=tokenizer.pad_token_id,
    num_return_sequences=3,
)

for t in text:
    print(t)
```
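
If you prefer not to use the `pipeline` helper, the model can also be called through `model.generate` directly. The sketch below reuses the `model` and `tokenizer` objects loaded in the example above; the generation settings are illustrative assumptions, not part of the official example.

```python
# Minimal sketch, assuming model and tokenizer are already loaded as shown above.
inputs = tokenizer("おはようございます、今日の天気は", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=30,                    # illustrative length limit
    do_sample=True,
    pad_token_id=tokenizer.pad_token_id,  # avoids padding warnings during generation
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```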
 

## Configuration

| Parameters | Vocab size | Training tokens | Architecture | Position type | Layers | Hidden dim | Attention heads |
| :--------- | :--------- | :-------------- | :----------- | :------------ | :----- | :--------- | :-------------- |
| 7B         | 51200      | 1.0T            | GPTNeoX      | RoPE          | 32     | 4096       | 32              |
| 13B        | 51200      | 1.0T            | GPTNeoX      | RoPE          | 40     | 5120       | 40              |
| 65B        | 51200      | 800B            | GPTNeoX      | RoPE          | 80     | 8192       | 64              |
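
As a rough sanity check on the table, the standard decoder-only estimate of about 12 × layers × hidden² parameters for the transformer blocks, plus vocab × hidden for the embedding table, lands close to the nominal model sizes. The script below is a back-of-the-envelope sketch under that assumed formula, not the exact GPTNeoX parameter accounting.

```python
# Back-of-the-envelope sketch (assumed formula, not exact GPTNeoX bookkeeping).
def approx_params_billions(layers: int, hidden: int, vocab: int = 51200) -> float:
    blocks = 12 * layers * hidden ** 2  # attention (~4*d^2) + feed-forward (~8*d^2) per layer
    embeddings = vocab * hidden         # input embedding table
    return (blocks + embeddings) / 1e9

for name, layers, hidden in [("7B", 32, 4096), ("13B", 40, 5120), ("65B", 80, 8192)]:
    print(f"{name}: ~{approx_params_billions(layers, hidden):.1f}B parameters")
```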

## Training Corpus

We used the Japanese portion of the Common Crawl corpus, the largest web corpus, as our training dataset. To clean the training corpus, we used CCNet and HojiChar. After cleaning, our corpus contains about 550B tokens.

## Tokenization

We use a SentencePiece tokenizer with a unigram language model and byte-fallback. We do not apply pre-tokenization with a Japanese tokenizer, so users can feed raw sentences directly into the tokenizer.
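
For illustration, the snippet below feeds a raw Japanese sentence straight into the tokenizer. The example sentence is arbitrary, and the exact subword split depends on the learned vocabulary.

```python
from transformers import AutoTokenizer

# use_fast=False selects the SentencePiece-backed tokenizer described above.
tokenizer = AutoTokenizer.from_pretrained("sbintuitions/sarashina1-13b", use_fast=False)

sentence = "今日はいい天気ですね。"  # "Nice weather today, isn't it?" (arbitrary example)
print(tokenizer.tokenize(sentence))  # subword pieces from the unigram model
print(tokenizer.encode(sentence))    # corresponding token IDs
```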

## Ethical Considerations and Limitations

Sarashina1 has not been tuned to follow instructions yet. Therefore, Sarashina1 may generate meaningless sequences, inaccurate statements, or biased/objectionable outputs. Before using Sarashina1, we would like developers to tune models based on human preferences and safety considerations.

## License

MIT License