---
base_model: google/gemma-2-2b-jpn-it
language:
  - ja
library_name: transformers
license: gemma
pipeline_tag: text-generation
tags:
  - conversational
  - mlx
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
  To access Gemma on Hugging Face, you’re required to review and agree to
  Google’s usage license. To do this, please ensure you’re logged in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---

# mlx-community/gemma-2-2b-jpn-it-8bit

The model [mlx-community/gemma-2-2b-jpn-it-8bit](https://huggingface.co/mlx-community/gemma-2-2b-jpn-it-8bit) was converted to MLX format from [google/gemma-2-2b-jpn-it](https://huggingface.co/google/gemma-2-2b-jpn-it) using mlx-lm version 0.19.0.
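
For reference, conversions like this are typically produced with the `mlx_lm.convert` utility. The command below is only a sketch of how an 8-bit build of this model might be reproduced, not a record of the actual command used; flag names follow the `mlx_lm.convert` CLI and may differ across mlx-lm versions.

```bash
# Hedged sketch: reproduce an 8-bit MLX conversion with mlx-lm.
# Verify flag names against `mlx_lm.convert --help` for your installed version.
pip install mlx-lm
mlx_lm.convert \
    --hf-path google/gemma-2-2b-jpn-it \
    --mlx-path gemma-2-2b-jpn-it-8bit \
    -q --q-bits 8
```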

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the 8-bit quantized model and its tokenizer from the Hugging Face Hub
model, tokenizer = load("mlx-community/gemma-2-2b-jpn-it-8bit")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
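
As an alternative to the Python API, mlx-lm also ships a command-line generator. The invocation below is a sketch; the flags shown are standard for the `mlx_lm.generate` CLI but may vary slightly between mlx-lm versions.

```bash
# Hedged sketch: one-off generation from the terminal with the mlx_lm CLI.
# Check `mlx_lm.generate --help` on your installed version for exact flags.
mlx_lm.generate \
    --model mlx-community/gemma-2-2b-jpn-it-8bit \
    --prompt "hello" \
    --max-tokens 256
```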