|
This model was exported using [GPTQModel](https://github.com/ModelCloud/GPTQModel). |
|
|
|
## How To Use |
|
|
|
### Use the model with mlx_lm
|
|
|
Install `mlx_lm` first:

```shell
pip install mlx_lm
```
|
|
|
```python
from mlx_lm import load, generate

# load the MLX-exported model and its tokenizer from the Hub
mlx_path = "ModelCloud/Falcon3-10B-Instruct-gptqmodel-4bit-vortex-mlx-v1"
mlx_model, tokenizer = load(mlx_path)

# wrap the prompt in a chat message and apply the model's chat template
prompt = "The capital of France is"
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True
)

# generate a completion; verbose=True streams tokens to stdout
text = generate(mlx_model, tokenizer, prompt=prompt, verbose=True)
```
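
For a quick smoke test without writing any Python, recent `mlx_lm` releases also ship a command-line generator (`mlx_lm.generate`), which can load the same model directly:

```shell
python -m mlx_lm.generate \
  --model ModelCloud/Falcon3-10B-Instruct-gptqmodel-4bit-vortex-mlx-v1 \
  --prompt "The capital of France is"
```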
|
|
|
### Export a GPTQ model to MLX
|
Install `gptqmodel` first:

```shell
pip install gptqmodel
```
|
|
|
```python
from gptqmodel import GPTQModel

# GPTQ-quantized source model and destination directory for the MLX export
gptq_model_path = "ModelCloud/Falcon3-10B-Instruct-gptqmodel-4bit-vortex-v1"
mlx_path = "./vortex/Falcon3-10B-Instruct-gptqmodel-4bit-vortex-v1-mlx"

# convert the GPTQ weights to MLX format and write them to mlx_path
GPTQModel.export(gptq_model_path, mlx_path, "mlx")
```
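
After the export finishes, a minimal sketch to sanity-check the result is to point `mlx_lm` at the local output directory (reusing the loading code from the section above; the path below assumes the `mlx_path` used in the export):

```python
from mlx_lm import load, generate

# load the freshly exported local MLX model and run a short generation
model, tokenizer = load("./vortex/Falcon3-10B-Instruct-gptqmodel-4bit-vortex-v1-mlx")
text = generate(model, tokenizer, prompt="The capital of France is", verbose=True)
```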