luow-amd committed on
Commit d488ea1 · verified · 1 Parent(s): bd24176

Update README.md

Files changed (1)
  1. README.md +6 -0
README.md CHANGED
@@ -23,6 +23,9 @@ python3 quantize_quark.py \
  --quant_scheme w_fp8_a_fp8 \
  --kv_cache_dtype fp8 \
  --num_calib_data 128 \
+ --model_export quark_safetensors \
+ --no_weight_matrix_merge \
+
  # If model size is too large for single GPU, please use multi GPU instead.
  python3 quantize_quark.py \
  --model_dir $MODEL_DIR \
@@ -30,6 +33,9 @@ python3 quantize_quark.py \
  --quant_scheme w_fp8_a_fp8 \
  --kv_cache_dtype fp8 \
  --num_calib_data 128 \
+ --model_export quark_safetensors \
+ --no_weight_matrix_merge \
+ --multi_gpu
  ```
  ## Deployment
  Quark has its own export format and allows FP8 quantized models to be efficiently deployed using the vLLM backend(vLLM-compatible).
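
For reference, a minimal sketch of serving the exported checkpoint with vLLM (not part of this commit): the `$EXPORT_DIR` placeholder, the `quark` quantization value, and the fp8 KV-cache flag are assumptions to check against your vLLM version's documentation.

```bash
# Sketch only: serve the Quark-exported FP8 checkpoint with vLLM.
# $EXPORT_DIR is a placeholder for the directory written by quantize_quark.py;
# --quantization quark and --kv-cache-dtype fp8 are assumed values, verify against the vLLM docs.
vllm serve $EXPORT_DIR \
    --quantization quark \
    --kv-cache-dtype fp8 \
    --tensor-parallel-size 1
```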