
rinna/youri-7b-instruction-gptq


Overview

rinna/youri-7b-instruction-gptq is a quantized version of rinna/youri-7b-instruction, produced with AutoGPTQ. The quantized weights are roughly 4x smaller than the original model, so it requires less memory and provides faster inference.
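As a back-of-the-envelope illustration of the size claim (weights only, ignoring activations and runtime overhead; the exact on-disk figures for this checkpoint may differ):

```python
# Approximate weight-memory footprint of a 7B-parameter model.
params = 7e9
fp16_gb = params * 2 / 1e9    # 16-bit weights: 2 bytes per parameter
int4_gb = params * 0.5 / 1e9  # 4-bit GPTQ weights: 0.5 bytes per parameter

print(f"fp16 weights:  ~{fp16_gb:.1f} GB")   # ~14.0 GB
print(f"4-bit weights: ~{int4_gb:.1f} GB")   # ~3.5 GB
```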


Benchmarking

Please refer to rinna's LM benchmark page.

How to use the model

import torch
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

# Load the tokenizer and the GPTQ-quantized model weights.
tokenizer = AutoTokenizer.from_pretrained("rinna/youri-7b-instruction-gptq")
model = AutoGPTQForCausalLM.from_quantized("rinna/youri-7b-instruction-gptq", use_safetensors=True)

# Instruction: "Translate the following Japanese into English."
instruction = "次の日本語を英語に翻訳してください。"
# Input text: a Japanese paragraph describing large language models.
input_text = "大規模言語モデル(だいきぼげんごモデル、英: large language model、LLM)は、多数のパラメータ(数千万から数十億)を持つ人工ニューラルネットワークで構成されるコンピュータ言語モデルで、膨大なラベルなしテキストを使用して自己教師あり学習または半教師あり学習によって訓練が行われる。"

# Build the instruction-following prompt. The Japanese preamble reads:
# "Below is a combination of an instruction describing a task and input
# that provides context. Write a response that appropriately satisfies
# the request."
prompt = f"""
以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。要求を適切に満たす応答を書きなさい。

### 指示:
{instruction}

### 入力:
{input_text}

### 応答:
"""
token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")

# Generate up to 200 new tokens with light sampling.
with torch.no_grad():
    output_ids = model.generate(
        input_ids=token_ids.to(model.device),
        max_new_tokens=200,
        do_sample=True,
        temperature=0.5,
        pad_token_id=tokenizer.pad_token_id,
        bos_token_id=tokenizer.bos_token_id,
        eos_token_id=tokenizer.eos_token_id
    )

# Decode the full sequence (prompt followed by the generated response).
output = tokenizer.decode(output_ids.tolist()[0])
print(output)
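Note that the decoded output contains the prompt as well as the generated response. A minimal sketch for extracting only the answer, assuming the decoded text still contains the "### 応答:" (response) marker from the template above (`extract_response` and the sample string are illustrative, not part of the model's API):

```python
# Hypothetical helper: keep only the text after the response marker and
# drop the trailing end-of-sequence token.
def extract_response(decoded: str) -> str:
    _, _, answer = decoded.partition("### 応答:")
    return answer.replace("</s>", "").strip()

sample = "### 指示:\n...\n### 応答:\nA large language model is ...</s>"
print(extract_response(sample))  # → "A large language model is ..."
```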

Tokenization

The model uses the original Llama 2 tokenizer.


How to cite

@misc{rinna-youri-7b-instruction-gptq,
    title = {rinna/youri-7b-instruction-gptq},
    author = {Wakatsuki, Toshiaki and Zhao, Tianyu and Sawada, Kei},
    url = {https://huggingface.co/rinna/youri-7b-instruction-gptq}
}

@inproceedings{sawada2024release,
    title = {Release of Pre-Trained Models for the {J}apanese Language},
    author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
    booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
    month = {5},
    year = {2024},
    pages = {13898--13905},
    url = {https://aclanthology.org/2024.lrec-main.1213},
    note = {\url{https://arxiv.org/abs/2404.01657}}
}

License

The Llama 2 license
