This model was exported using GPTQModel. Below is example code for exporting a model from GPTQ format to MLX format and then verifying that the exported model loads and generates with mlx-lm (note that MLX runs on Apple silicon).
Example:
```python
from gptqmodel import GPTQModel

# Load the GPTQ-quantized model
gptq_model_path = "ModelCloud/Falcon3-10B-Instruct-gptqmodel-4bit-vortex-v1"
mlx_path = "./vortex/Falcon3-10B-Instruct-gptqmodel-4bit-vortex-mlx-v1"

# Export to MLX format
GPTQModel.export(gptq_model_path, mlx_path, "mlx")

# Load the exported MLX model and check that it works
from mlx_lm import load, generate

mlx_model, tokenizer = load(mlx_path)

prompt = "The capital of France is"
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True
)

text = generate(mlx_model, tokenizer, prompt=prompt, verbose=True)
```
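Since `generate` returns the completion as a plain string, a quick way to confirm the export worked is to inspect the output directly. A minimal sketch follows; the substring check is only an illustrative sanity test, not part of the GPTQModel or mlx-lm APIs:

```python
# `generate` returned the completion as a string; print it for inspection.
print(text)

# Illustrative sanity check (hypothetical, not part of either API):
# a working export should answer the prompt above with "Paris".
assert "Paris" in text, "unexpected completion; the MLX export may be broken"
```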