---
license: apache-2.0
language:
  - ja
  - en
tags:
  - japanese
  - causal-lm
inference: false
---

# CyberAgentLM2-7B

## Model Description

CyberAgentLM2 is a decoder-only language model pre-trained on 1.3T tokens of publicly available Japanese and English datasets.

Variant: [CyberAgentLM2-7B-Chat](https://huggingface.co/cyberagent/calm2-7b-chat) (a hedged usage sketch follows the Usage section below)

## Requirements

- transformers >= 4.34.1
- accelerate
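
For example, with pip (the exact command is an assumption; any environment that provides the versions above works):

```bash
pip install "transformers>=4.34.1" accelerate
```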

## Usage

```python
import transformers
from packaging import version
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# Comparing version strings lexicographically is unreliable (e.g. "4.9" < "4.34");
# compare parsed versions instead.
assert version.parse(transformers.__version__) >= version.parse("4.34.1")

model = AutoModelForCausalLM.from_pretrained("cyberagent/calm2-7b", device_map="auto", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("cyberagent/calm2-7b")
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# A Japanese completion prompt: "Because of AI, our lives ..."
prompt = "AIによって私達の暮らしは、"

token_ids = tokenizer.encode(prompt, return_tensors="pt")
output_ids = model.generate(
    input_ids=token_ids.to(model.device),
    max_new_tokens=100,
    do_sample=True,
    temperature=0.9,
    streamer=streamer,  # prints tokens to stdout as they are generated
)
```
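
The `TextStreamer` prints tokens as they are generated; to also capture the completion as a string, decode the returned ids (a minimal follow-up to the snippet above, no new assumptions):

```python
# The returned ids include the prompt; decode them to recover the full text.
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

For the chat variant linked above, the prompt format is documented on its own model card; the sketch below is a hedged illustration, assuming a plain `USER:`/`ASSISTANT:` turn format and the repository id `cyberagent/calm2-7b-chat` (both taken from the chat variant's card, not from this one):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "cyberagent/calm2-7b-chat", device_map="auto", torch_dtype="auto"
)
tokenizer = AutoTokenizer.from_pretrained("cyberagent/calm2-7b-chat")

# "How will AI change our lives?" formatted as a single USER turn
# (format assumed from the chat variant's model card).
prompt = "USER: AIによって私達の暮らしはどのように変わりますか?\nASSISTANT: "
token_ids = tokenizer.encode(prompt, return_tensors="pt")
output_ids = model.generate(
    input_ids=token_ids.to(model.device),
    max_new_tokens=300,
    do_sample=True,
    temperature=0.8,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```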

## Model Details

- Model size: 7B
- Trained tokens: 1.3T
- Context length: 4096
- Model type: Transformer-based Language Model
- Language(s): Japanese, English
- Developed by: CyberAgent, Inc.
- License: Apache-2.0
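
Because the context length is 4096 tokens, inputs longer than the window must be truncated before generation. A minimal sketch, reusing the `tokenizer` from the Usage section (`long_text` and the budget split are illustrative assumptions):

```python
max_new_tokens = 100
context_length = 4096  # from the model details above

token_ids = tokenizer.encode(long_text, return_tensors="pt")
# Keep only the most recent tokens so prompt + generation fits the window.
token_ids = token_ids[:, -(context_length - max_new_tokens):]
```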

## Author

Ryosuke Ishigami

## Citations

```bibtex
@article{touvron2023llama,
  title={LLaMA: Open and Efficient Foundation Language Models},
  author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
  journal={arXiv preprint arXiv:2302.13971},
  year={2023}
}
```