Files changed (1)
  1. README.md +28 -10
README.md CHANGED
@@ -1,19 +1,19 @@
-This model was exported using [GPTQModel](https://github.com/ModelCloud/GPTQModel). Below is example code for exporting a model from GPTQ format to MLX format.
+This model was exported using [GPTQModel](https://github.com/ModelCloud/GPTQModel).
 
-## Example:
-```python
-from gptqmodel import GPTQModel
+## How To Use
+
+### Use the model
 
-# load gptq quantized model
-gptq_model_path = "ModelCloud/Falcon3-10B-Instruct-gptqmodel-4bit-vortex-v1"
-mlx_path = f"./vortex/Falcon3-10B-Instruct-gptqmodel-4bit-vortex-v1-mlx"
+```shell
+pip install mlx_lm
+```
 
-# export to mlx model
-GPTQModel.export(gptq_model_path, mlx_path, "mlx")
+With mlx_lm installed, load the model and run generation:
 
-# load mlx model check if it works
+```python
 from mlx_lm import load, generate
 
+mlx_path = "ModelCloud/Falcon3-10B-Instruct-gptqmodel-4bit-vortex-mlx-v1"
 mlx_model, tokenizer = load(mlx_path)
 prompt = "The capital of France is"
 
@@ -23,4 +23,22 @@ prompt = tokenizer.apply_chat_template(
 )
 
 text = generate(mlx_model, tokenizer, prompt=prompt, verbose=True)
 ```
+
+### Export GPTQ to MLX
+```shell
+pip install gptqmodel
+```
+
+With gptqmodel installed, export a GPTQ model to MLX format:
+
+```python
+from gptqmodel import GPTQModel
+
+# load the GPTQ-quantized model
+gptq_model_path = "ModelCloud/Falcon3-10B-Instruct-gptqmodel-4bit-vortex-v1"
+mlx_path = "./vortex/Falcon3-10B-Instruct-gptqmodel-4bit-vortex-v1-mlx"
+
+# export to MLX format
+GPTQModel.export(gptq_model_path, mlx_path, "mlx")
+```
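The second hunk leaves the middle of the usage example unchanged, so README lines 20–22 (the body of the `apply_chat_template(...)` call) never appear in the diff and are left elided above. For context only, a typical completion of that call following the common mlx_lm chat-template pattern might look like the sketch below; the message list and keyword arguments are assumptions, not text recovered from the README:

```python
from mlx_lm import load, generate

# MLX repo id taken from the diff above
mlx_path = "ModelCloud/Falcon3-10B-Instruct-gptqmodel-4bit-vortex-mlx-v1"
mlx_model, tokenizer = load(mlx_path)

# wrap the raw prompt in a chat message and apply the model's chat template
# (assumed arguments; the actual README lines are not shown in the diff)
messages = [{"role": "user", "content": "The capital of France is"}]
prompt = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
)

text = generate(mlx_model, tokenizer, prompt=prompt, verbose=True)
```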
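The removed version of the README loaded the exported model right back ("load mlx model check if it works"), a step the new version drops. A minimal smoke test in that spirit, using only the calls shown in the diff plus mlx_lm's `load`/`generate`, could be:

```python
from gptqmodel import GPTQModel
from mlx_lm import load, generate

# paths as in the diff above
gptq_model_path = "ModelCloud/Falcon3-10B-Instruct-gptqmodel-4bit-vortex-v1"
mlx_path = "./vortex/Falcon3-10B-Instruct-gptqmodel-4bit-vortex-v1-mlx"

# export the GPTQ checkpoint to MLX format
GPTQModel.export(gptq_model_path, mlx_path, "mlx")

# reload the exported model and generate once to confirm the export works
mlx_model, tokenizer = load(mlx_path)
text = generate(mlx_model, tokenizer, prompt="The capital of France is", verbose=True)
```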