cicdatopea committed (verified) · Commit 2e2f97d · Parent(s): 3039495

Update README.md

Files changed (1): README.md (+3 -1)
README.md CHANGED
@@ -1,11 +1,13 @@
 ---
 datasets:
 - NeelNanda/pile-10k
+base_model:
+- tiiuae/Falcon3-7B-Base
 ---
 
 ## Model Details
 
-This model is an int4 model with group_size 128 and symmetric quantization of [Falcon3-7B-Base](https://huggingface.co/tiiuae/Falcon3-7B-Base) generated by [intel/auto-round](https://github.com/intel/auto-round). Load the model with revision `a10e358` to use the AutoGPTQ format, or with revision `e9aa317` to use the AutoAWQ format.
+This model is an int4 model with group_size 128 and symmetric quantization of [tiiuae/Falcon3-7B-Base](https://huggingface.co/tiiuae/Falcon3-7B-Base) generated by [intel/auto-round](https://github.com/intel/auto-round). Load the model with revision `a10e358` to use the AutoGPTQ format, or with revision `e9aa317` to use the AutoAWQ format.
 
 ## How To Use
 ### INT4 Inference (CPU/HPU/CUDA)
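
For context on the revision note in the diff above: the sketch below shows how such a revision-pinned load might look with `transformers`. It is not part of the commit; the repo id is a placeholder, and loading the GPTQ- or AWQ-format weights additionally assumes the matching quantization backend is installed.

```python
# Hypothetical sketch: MODEL_ID is a placeholder for this quantized repo;
# the revision hashes are the ones named in the README text above.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "<this-quantized-model-repo>"  # placeholder, substitute the actual repo id
REVISION = "a10e358"  # AutoGPTQ-format weights; use "e9aa317" for the AutoAWQ format

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, revision=REVISION)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    revision=REVISION,     # pin the checkout to the desired quantization format
    device_map="auto",
    torch_dtype="auto",
)

prompt = "There is a girl who likes adventure,"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```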