cicdatopea committed
Commit 3d49c61
1 Parent(s): 2d6349c

Update README.md

Files changed (1): README.md +2 -2
README.md CHANGED
@@ -5,7 +5,7 @@ datasets:
 
 ## Model Details
 
-This model is an int4 model with group_size 128 and symmetric quantization of [tiiuae/Falcon3-3B-Instruct](https://huggingface.co/tiiuae/Falcon3-3B-Instruct) generated by [intel/auto-round](https://github.com/intel/auto-round). Load the model with revision `1839199` to use AutoGPTQ format
+This model is an int4 model with group_size 128 and symmetric quantization of [tiiuae/Falcon3-3B-Base](https://huggingface.co/tiiuae/Falcon3-3B-Base) generated by [intel/auto-round](https://github.com/intel/auto-round). Load the model with revision `1839199` to use AutoGPTQ format
 
 ## How To Use
 ### INT4 Inference(CPU/HPU/CUDA)
@@ -93,7 +93,7 @@ auto-round --model "OPEA/falcon3-3B-int4-sym-inc" --eval --eval_bs 16 --tasks l
 Here is the sample command to generate the model.
 ```bash
 auto-round \
---model tiiuae/Falcon3-3B-Instruct \
+--model tiiuae/Falcon3-3B-Base \
 --device 0 \
 --group_size 128 \
 --nsamples 512 \
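For context on what the README line being edited describes: "int4 with group_size 128 and symmetric quantization" means weights are split into groups of 128, and each group shares one scale chosen so the largest magnitude maps to the int4 limit. A minimal NumPy sketch of that scheme (illustrative only — not auto-round's actual implementation; the function names are made up here):

```python
import numpy as np

def quantize_sym_int4(weights, group_size=128):
    """Symmetric int4 quantization with one shared scale per group of
    `group_size` weights (a sketch of the scheme the README names)."""
    w = weights.reshape(-1, group_size)
    # Symmetric: one scale per group, mapping the max |w| to the int4 limit 7.
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    scale = np.where(scale == 0, 1.0, scale)  # guard all-zero groups
    # Round to the signed 4-bit integer range [-8, 7].
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Reconstruct approximate float weights from int4 codes and group scales.
    return (q * scale).astype(np.float32)

rng = np.random.default_rng(0)
w = rng.standard_normal(256).astype(np.float32)  # 2 groups of 128
q, s = quantize_sym_int4(w)
err = np.abs(dequantize(q, s).reshape(-1) - w).max()
```

Because the scale is shared per group, the worst-case rounding error within a group is half a quantization step (scale / 2), which is why smaller group sizes trade memory for accuracy.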