---
license: mit
language: ja
---

# Mistral-7B Japanese [LAPT + CLP+]

## How to use

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Load the adapted model (PEFT adapter + base Mistral-7B) and its tokenizer.
model = AutoPeftModelForCausalLM.from_pretrained(
    "atsuki-yamaguchi/Mistral-7B-v0.1-clpp-ja"
)
tokenizer = AutoTokenizer.from_pretrained(
    "atsuki-yamaguchi/Mistral-7B-v0.1-clpp-ja"
)

# w/ GPU: load in 8-bit (requires bitsandbytes) and let
# device_map="auto" place the weights automatically.
model = AutoPeftModelForCausalLM.from_pretrained(
    "atsuki-yamaguchi/Mistral-7B-v0.1-clpp-ja",
    device_map="auto",
    load_in_8bit=True,
)
```
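
Once loaded, the model can be used for standard text generation. The snippet below is a minimal sketch, not part of the original card; the Japanese prompt and the generation settings are arbitrary illustrative choices.

```python
# Illustrative usage sketch (assumed, not from the original card):
# the prompt and max_new_tokens value are arbitrary examples.
inputs = tokenizer("日本の首都は", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```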

## Citation

```bibtex
@article{yamaguchi2024empirical,
  title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
  author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
  journal={ArXiv},
  year={2024},
  volume={abs/2402.10712},
  url={https://arxiv.org/abs/2402.10712}
}
```

## Link

For more details, please visit the project repository: https://github.com/gucci-j/llm-cva