---
license: mit
---

<div align="center">
  <img src="https://raw.githubusercontent.com/InternLM/lmdeploy/0be9e7ab6fe9a066cfb0a09d0e0c8d2e28435e58/resources/lmdeploy-logo.svg" width="450"/>
</div>

# INT4 Weight-only Quantization and Deployment (W4A16)

LMDeploy adopts the [AWQ](https://arxiv.org/abs/2306.00978) algorithm for 4-bit weight-only quantization. Thanks to its high-performance CUDA kernels, inference with the 4-bit quantized model is up to 2.4x faster than FP16.

LMDeploy supports the following NVIDIA GPUs for W4A16 inference:

- Turing (sm75): 20 series, T4
- Ampere (sm80, sm86): 30 series, A10, A16, A30, A100
- Ada Lovelace (sm89): 40 series

Before proceeding with the quantization and inference, please ensure that lmdeploy is installed.

```shell
pip install lmdeploy[all]
```

This article comprises the following sections:

<!-- toc -->

- [Inference](#inference)
- [Evaluation](#evaluation)
- [Service](#service)

<!-- tocstop -->

## Inference

For lmdeploy v0.5.0, please configure the chat template first. Create the following JSON file `chat_template.json`; its `meta_instruction` is the model's default system prompt in Chinese, introducing InternVL as a helpful and harmless multimodal assistant developed jointly by Shanghai AI Laboratory and SenseTime.

```json
{
    "model_name": "internlm2",
    "meta_instruction": "你是由上海人工智能实验室联合商汤科技开发的书生多模态大模型,英文名叫InternVL, 是一个有用无害的人工智能助手。",
    "stop_words": ["<|im_start|>", "<|im_end|>"]
}
```
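
If you prefer to keep everything in Python rather than a separate JSON file, the same template can be built with `ChatTemplateConfig` directly. This is only a sketch: it assumes the dataclass fields mirror the JSON keys above, so adjust it to the lmdeploy version you have installed.

```python
from lmdeploy.model import ChatTemplateConfig

# In-code equivalent of chat_template.json (assumed field names: model_name,
# meta_instruction, stop_words -- verify against your installed lmdeploy version).
chat_template_config = ChatTemplateConfig(
    model_name='internlm2',
    meta_instruction=('你是由上海人工智能实验室联合商汤科技开发的书生多模态大模型,'
                      '英文名叫InternVL, 是一个有用无害的人工智能助手。'),
    stop_words=['<|im_start|>', '<|im_end|>'],
)
```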

With the following code, you can perform batched offline inference with the quantized model:

```python
from lmdeploy import pipeline
from lmdeploy.model import ChatTemplateConfig
from lmdeploy.vl import load_image

model = 'OpenGVLab/InternVL2-2B-AWQ'
# Load the chat template defined in chat_template.json above
chat_template_config = ChatTemplateConfig.from_json('chat_template.json')
# Download a sample image to query the model with
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
pipe = pipeline(model, chat_template_config=chat_template_config, log_level='INFO')
# A (prompt, image) tuple issues a single vision-language request
response = pipe(('describe this image', image))
print(response)
```

For more information about the pipeline parameters, please refer to [here](https://github.com/InternLM/lmdeploy/blob/main/docs/en/inference/pipeline.md).
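
As a quick illustration of those parameters, the sketch below passes an explicit engine config and generation config to the pipeline. The concrete values (`session_len`, `max_new_tokens`, sampling settings) are placeholder assumptions, not tuned recommendations:

```python
from lmdeploy import pipeline, TurbomindEngineConfig, GenerationConfig
from lmdeploy.model import ChatTemplateConfig
from lmdeploy.vl import load_image

model = 'OpenGVLab/InternVL2-2B-AWQ'
chat_template_config = ChatTemplateConfig.from_json('chat_template.json')

# TurboMind engine settings: AWQ weight format, tensor-parallel degree, and
# maximum session (context) length -- all values here are illustrative.
backend_config = TurbomindEngineConfig(model_format='awq', tp=1, session_len=8192)

# Sampling settings for generation, also illustrative.
gen_config = GenerationConfig(max_new_tokens=512, temperature=0.7, top_p=0.8)

pipe = pipeline(model,
                backend_config=backend_config,
                chat_template_config=chat_template_config)

image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
response = pipe(('describe this image', image), gen_config=gen_config)
print(response.text)
```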

## Evaluation

Please refer to [this guide](https://opencompass.readthedocs.io/en/latest/advanced_guides/evaluation_turbomind.html) for model evaluation with LMDeploy.

## Service

LMDeploy's `api_server` enables models to be easily packed into services with a single command. The provided RESTful APIs are compatible with OpenAI's interfaces. Below is an example of service startup:

```shell
lmdeploy serve api_server OpenGVLab/InternVL2-2B-AWQ --backend turbomind --model-format awq --chat-template chat_template.json
```

The default port of `api_server` is `23333`. After the server is launched, you can communicate with it from the terminal through `api_client`:

```shell
lmdeploy serve api_client http://0.0.0.0:23333
```

You can overview and try out the `api_server` APIs online via the Swagger UI at `http://0.0.0.0:23333`, or read the API specification [here](https://github.com/InternLM/lmdeploy/blob/main/docs/en/serving/restful_api.md).
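
Because the APIs are OpenAI-compatible, you can also query the service programmatically with the official `openai` Python client. The snippet below is a sketch that assumes the server started above is reachable on the default port and that no API key has been configured (any placeholder string is accepted in that case):

```python
from openai import OpenAI

# Point the OpenAI client at the local api_server (v1-compatible routes).
client = OpenAI(api_key='not-used', base_url='http://0.0.0.0:23333/v1')

# The server reports the name of the model it is serving.
model_name = client.models.list().data[0].id

# Chat completion with an image passed as an OpenAI-style image_url part.
response = client.chat.completions.create(
    model=model_name,
    messages=[{
        'role': 'user',
        'content': [
            {'type': 'text', 'text': 'describe this image'},
            {'type': 'image_url',
             'image_url': {'url': 'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg'}},
        ],
    }],
    temperature=0.7,
)
print(response.choices[0].message.content)
```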