---
language:
- ja
- en
license: llama3.1
tags:
- japanese
- llama
- llama-3
- mlx
pipeline_tag: text-generation
inference: false
---

# mlx-community/Llama-3.1-70B-Japanese-Instruct-2407-4bit

The model [mlx-community/Llama-3.1-70B-Japanese-Instruct-2407-4bit](https://huggingface.co/mlx-community/Llama-3.1-70B-Japanese-Instruct-2407-4bit) was converted to MLX format from [cyberagent/Llama-3.1-70B-Japanese-Instruct-2407](https://huggingface.co/cyberagent/Llama-3.1-70B-Japanese-Instruct-2407) using mlx-lm version **0.16.1**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Llama-3.1-70B-Japanese-Instruct-2407-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
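
Because this is an instruct-tuned model, prompts generally work better when wrapped in the model's chat template rather than passed as raw text. A minimal sketch of that pattern is below, assuming the tokenizer returned by `load` exposes `apply_chat_template` (standard for Llama-3 tokenizers in mlx-lm); the Japanese example question is illustrative only.

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Llama-3.1-70B-Japanese-Instruct-2407-4bit")

# Wrap the user message in the chat template (assumes the tokenizer
# ships one, as Llama-3 instruct tokenizers typically do).
messages = [{"role": "user", "content": "日本の首都はどこですか？"}]  # "What is the capital of Japan?"
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```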