wenhuach committed on
Commit
8b4ea81
1 Parent(s): 274bed5

Update README.md

Files changed (1)
  1. README.md +4 -5
README.md CHANGED
@@ -8,19 +8,18 @@ This is a recipe of int4 model with group_size 128 for [meta-llama/Meta-Llama-3.
 
 ## Reproduce the model
 
-Here is the sample command to reproduce the model
+This is an outdated recipe. We recommend using symmetric quantization by removing '--asym'
 
 ```bash
-git clone https://github.com/intel/auto-round
-cd auto-round/examples/language-modeling
-pip install -r requirements.txt
-python3 main.py \
+
+auto-round \
 --model_name meta-llama/Meta-Llama-3.1-8B-Instruct \
 --device 0 \
 --group_size 128 \
 --bits 4 \
 --nsamples 512 \
 --iters 1000 \
+--asym \
 --model_dtype "fp16" \
 --deployment_device 'auto_round' \
 --eval_bs 16 \
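Per the note in this commit, the recommended symmetric variant of the recipe would simply drop `--asym` while keeping every other flag from the diff. A hedged sketch of that invocation, assuming the same `auto-round` CLI shown above is installed and that trailing continuations end at the final flag:

```shell
# Symmetric-quantization variant of the recipe above: identical flags, minus --asym.
# Assumes the auto-round CLI (https://github.com/intel/auto-round) is on PATH
# and that device 0 refers to an available accelerator.
auto-round \
  --model_name meta-llama/Meta-Llama-3.1-8B-Instruct \
  --device 0 \
  --group_size 128 \
  --bits 4 \
  --nsamples 512 \
  --iters 1000 \
  --model_dtype "fp16" \
  --deployment_device 'auto_round' \
  --eval_bs 16
```

This is a configuration sketch, not a verified command line; flag names and defaults should be checked against the installed auto-round version.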