---
license: apache-2.0
tags:
- AWQ
inference: false
---
# Falcon-7B-Instruct (4-bit 64g AWQ Quantized)
[Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) is a 7B-parameter causal decoder-only model built by [TII](https://www.tii.ae) based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and finetuned on a mixture of chat/instruct datasets.
This is a 4-bit, group size 64 AWQ quantized version of that model. For more information about AWQ quantization, please refer to the [llm-awq repository](https://github.com/mit-han-lab/llm-awq).
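At a high level, the `w4-g64` settings mean each weight is stored as a 4-bit integer with one scale and zero point per group of 64 consecutive weights; the activation-aware part of AWQ additionally searches for per-channel scaling factors to protect salient weights. The snippet below is only a minimal sketch of plain group-wise 4-bit quantization (not the actual AWQ implementation) to illustrate what those settings correspond to:
```python
import torch

def quantize_group(w, n_bit=4, group_size=64):
    # w: 1-D float tensor whose length is a multiple of group_size
    w = w.reshape(-1, group_size)
    max_int = 2 ** n_bit - 1
    # Per-group asymmetric quantization with a zero point
    w_min = w.min(dim=1, keepdim=True).values
    w_max = w.max(dim=1, keepdim=True).values
    scale = (w_max - w_min).clamp(min=1e-5) / max_int
    zero = torch.round(-w_min / scale)
    q = torch.clamp(torch.round(w / scale) + zero, 0, max_int)
    # De-quantize to inspect the reconstruction error
    return ((q - zero) * scale).reshape(-1)

w = torch.randn(4096)
print((w - quantize_group(w)).abs().mean())  # small per-weight reconstruction error
```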
## Model Date
July 5, 2023
## Model License
Please refer to the original Falcon model license ([link](https://huggingface.co/tiiuae/falcon-7b-instruct)).
Please refer to the AWQ quantization license ([link](https://github.com/mit-han-lab/llm-awq/blob/main/LICENSE)).
## CUDA Version
This model was successfully tested with CUDA driver v530.30.02, CUDA runtime v11.7, and Python v3.10.11. Please note that AWQ requires NVIDIA GPUs with compute capability 8.0 or higher.
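A quick way to confirm your GPU meets that requirement (a minimal check using PyTorch):
```python
import torch

# Compute capability 8.0+ covers e.g. A100, RTX 30xx (8.6), RTX 40xx (8.9)
major, minor = torch.cuda.get_device_capability(0)
print(f"Compute capability: {major}.{minor}")
assert major >= 8, "AWQ CUDA kernels require compute capability 8.0 or higher"
```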
## How to Use
```bash
git clone https://github.com/mit-han-lab/llm-awq \
&& cd llm-awq \
&& git checkout 71d8e68df78de6c0c817b029a568c064bf22132d \
&& pip install -e . \
&& cd awq/kernels \
&& python setup.py install
```
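After the install, you can sanity-check that the CUDA kernels built and import cleanly. The extension name below (`awq_inference_engine`) is what the pinned commit is assumed to register and may differ in later revisions of the repository:
```python
# Verify the AWQ CUDA extension and Python package are importable
import awq_inference_engine  # assumed extension name at the pinned commit
from awq.quantize.quantizer import real_quantize_model_weight
print("AWQ kernels available")
```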
```python
import torch
from awq.quantize.quantizer import real_quantize_model_weight
from transformers import AutoModelForCausalLM, AutoConfig, AutoTokenizer
from accelerate import init_empty_weights, load_checkpoint_and_dispatch
from huggingface_hub import hf_hub_download
model_name = "tiiuae/falcon-7b-instruct"
# Config
config = AutoConfig.from_pretrained(model_name, trust_remote_code=True)
# Tokenizer
tokenizer = AutoTokenizer.from_pretrained(config.tokenizer_name)
# Model
w_bit = 4
q_config = {
    "zero_point": True,   # asymmetric quantization with a per-group zero point
    "q_group_size": 64,   # one scale/zero point per 64 weights
}
# Download the pre-quantized checkpoint from the Hub
load_quant = hf_hub_download('abhinavkulkarni/falcon-7b-instruct-w4-g64-awq', 'pytorch_model.bin')
# Build the model skeleton on the meta device, swap in AWQ layers, then load the quantized weights
with init_empty_weights():
    model = AutoModelForCausalLM.from_pretrained(model_name, config=config,
                                                 torch_dtype=torch.float16, trust_remote_code=True)
real_quantize_model_weight(model, w_bit=w_bit, q_config=q_config, init_only=True)
model = load_checkpoint_and_dispatch(model, load_quant, device_map="balanced")
# Inference
prompt = f'''What is the difference between nuclear fusion and fission?
###Response:'''
input_ids = tokenizer(prompt, return_tensors='pt').input_ids.cuda()
output = model.generate(
    inputs=input_ids,
    temperature=0.7,
    max_new_tokens=512,
    top_p=0.15,
    top_k=0,
    repetition_penalty=1.1,
    eos_token_id=tokenizer.eos_token_id
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
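To confirm the memory savings from 4-bit weights, you can check how much GPU memory is held after loading. This is only a rough sketch; the exact figure depends on your driver, the `device_map`, and any KV cache allocated during generation, but the weights should come in well under the ~14 GB of the float16 original:
```python
import torch

# Memory held by tensors on the GPU after loading the quantized model
print(f"GPU memory allocated: {torch.cuda.memory_allocated() / 1e9:.2f} GB")
```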
## Evaluation
This evaluation was done using [LM-Eval](https://github.com/EleutherAI/lm-evaluation-harness).
[Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct)
| Task     | Version | Metric          |   Value |
|----------|--------:|-----------------|--------:|
| wikitext |       1 | word_perplexity | 14.5069 |
|          |         | byte_perplexity |  1.6490 |
|          |         | bits_per_byte   |  0.7216 |
[Falcon-7B-Instruct (4-bit 64-group AWQ)](https://huggingface.co/abhinavkulkarni/falcon-7b-instruct-w4-g64-awq)
| Task     | Version | Metric          |   Value |
|----------|--------:|-----------------|--------:|
| wikitext |       1 | word_perplexity | 14.8667 |
|          |         | byte_perplexity |  1.6566 |
|          |         | bits_per_byte   |  0.7282 |
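The three wikitext metrics are different views of the same measurement: `bits_per_byte` is log2 of `byte_perplexity`, and `word_perplexity` normalizes the same log-likelihood per word rather than per byte. Quantization raises word perplexity from 14.5069 to 14.8667, roughly a 2.5% degradation. A small verification sketch for the bits-per-byte relationship:
```python
import math

# bits_per_byte = log2(byte_perplexity)
print(math.log2(1.6490))  # ~0.7216 (FP16 model)
print(math.log2(1.6566))  # ~0.7282 (4-bit AWQ model)
```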
## Acknowledgements
*Paper coming soon* 😊. In the meantime, you can use the following information to cite the Falcon model:
```
@article{falcon40b,
title={{Falcon-40B}: an open large language model with state-of-the-art performance},
author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
year={2023}
}
```
This model was quantized using the AWQ technique. If you find AWQ useful or relevant to your research, please cite the paper:
```
@article{lin2023awq,
title={AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration},
author={Lin, Ji and Tang, Jiaming and Tang, Haotian and Yang, Shang and Dang, Xingyu and Han, Song},
journal={arXiv},
year={2023}
}
```