---
license: apache-2.0
---

## Installation from source

```bash
git clone https://github.com/foundation-model-stack/fms-extras
cd fms-extras
pip install -e .
```
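
As a quick sanity check that the editable install worked (the module name `fms_extras` is assumed here from the repository name):

```python
# Verify the package is importable; prints the install location.
import fms_extras

print(fms_extras.__file__)
```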


## Description

This model is intended to be used as an accelerator for [granite-7b-instruct](https://huggingface.co/ibm-granite/granite-7b-instruct) and takes inspiration
from the Medusa speculative decoding architecture. This accelerator modifies the MLP into a multi-stage MLP, where each stage predicts
a single token in the draft based on both a state vector and the token sampled
from the prior stage (the base model can be considered stage 0).
The state vector from the base model provides contextual information to the accelerator,
while conditioning on prior sampled tokens allows it to produce higher-quality draft n-grams.
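
As a rough illustration of that design, here is a minimal sketch of an n-stage speculator; the class and variable names are hypothetical, and this is not the actual fms-extras implementation:

```python
# Hypothetical sketch of a multi-stage MLP speculator; names are
# illustrative, not the actual fms-extras classes.
import torch
import torch.nn as nn


class MLPSpeculatorSketch(nn.Module):
    def __init__(self, vocab_size: int, hidden_dim: int, n_stages: int = 5):
        super().__init__()
        self.embed = nn.ModuleList(
            nn.Embedding(vocab_size, hidden_dim) for _ in range(n_stages)
        )
        self.proj = nn.ModuleList(
            nn.Linear(2 * hidden_dim, hidden_dim) for _ in range(n_stages)
        )
        self.head = nn.ModuleList(
            nn.Linear(hidden_dim, vocab_size) for _ in range(n_stages)
        )

    def forward(self, state: torch.Tensor, last_token: torch.Tensor) -> torch.Tensor:
        # state: base-model state vector, shape (batch, hidden_dim) -- stage 0
        # last_token: token id sampled from the base model, shape (batch,)
        draft = []
        for embed, proj, head in zip(self.embed, self.proj, self.head):
            # Each stage conditions on both the running state and the
            # token sampled at the prior stage.
            x = torch.cat([state, embed(last_token)], dim=-1)
            state = torch.relu(proj(x))
            logits = head(state)
            last_token = logits.argmax(dim=-1)  # greedy here for simplicity
            draft.append(last_token)
        return torch.stack(draft, dim=-1)  # (batch, n_stages) draft tokens
```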

Note: The underlying MLP speculator is a generic architecture that can be trained with any generative model to accelerate inference.
Training is lightweight and can be completed in only a few days, depending on base model size and speed.

## Repository Links

1. [Paged Attention KV-Cache / Speculator](https://github.com/foundation-model-stack/fms-extras)
2. [Production Server with speculative decoding](https://github.com/IBM/text-generation-inference.git)
3. [Speculator training](https://github.com/foundation-model-stack/fms-fsdp/pull/35)

## Samples

_Note: For all samples, your environment must have access to CUDA._

### Use in IBM Production TGIS

*To try this out in a production-like environment, please use the pre-built Docker image:*

#### Setup

```bash
HF_HUB_CACHE=/hf_hub_cache
mkdir -p $HF_HUB_CACHE
chmod a+w $HF_HUB_CACHE
HF_HUB_TOKEN="your huggingface hub token"
TGIS_IMAGE=quay.io/wxpe/text-gen-server:main.ddc56ee

docker pull $TGIS_IMAGE

# optionally download granite-7b-instruct if the weights do not already exist
docker run --rm \
    -v $HF_HUB_CACHE:/models \
    -e HF_HUB_CACHE=/models \
    -e TRANSFORMERS_CACHE=/models \
    $TGIS_IMAGE \
    text-generation-server download-weights \
    ibm-granite/granite-7b-instruct \
    --token $HF_HUB_TOKEN

# optionally download the speculator model if the weights do not already exist
docker run --rm \
    -v $HF_HUB_CACHE:/models \
    -e HF_HUB_CACHE=/models \
    -e TRANSFORMERS_CACHE=/models \
    $TGIS_IMAGE \
    text-generation-server download-weights \
    ibm-granite/granite-7b-instruct-accelerator \
    --token $HF_HUB_TOKEN

# note: if the weights were downloaded separately (not with the above commands), please place them in the HF_HUB_CACHE directory and refer to them with /models/<model_name>
docker run -d --rm --gpus all \
    --name my-tgis-server \
    -p 8033:8033 \
    -v $HF_HUB_CACHE:/models \
    -e HF_HUB_CACHE=/models \
    -e TRANSFORMERS_CACHE=/models \
    -e MODEL_NAME=ibm-granite/granite-7b-instruct \
    -e SPECULATOR_NAME=ibm-granite/granite-7b-instruct-accelerator \
    -e FLASH_ATTENTION=true \
    -e PAGED_ATTENTION=true \
    -e DTYPE=float16 \
    $TGIS_IMAGE

# check logs and wait for "gRPC server started on port 8033" and "HTTP server started on port 3000"
docker logs my-tgis-server -f

# get the client sample (Note: The first prompt will take longer as there is a warmup time)
conda create -n tgis-client-env python=3.11
conda activate tgis-client-env
git clone --branch main --single-branch https://github.com/IBM/text-generation-inference.git
cd text-generation-inference/integration_tests
make gen-client
pip install . --no-cache-dir
```

#### Run Sample

```bash
python sample_client.py
```

_Note: the first prompt may be slower, as there is a slight warmup time._

### Use in Hugging Face TGI

#### Start the server

```bash
model=ibm-granite/granite-7b-instruct-accelerator
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run
docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:latest --model-id $model
```

_Note: for tensor parallelism, add `--num-shard`._

#### Make a request

```bash
curl 127.0.0.1:8080/generate_stream \
    -X POST \
    -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' \
    -H 'Content-Type: application/json'
```
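
The same streaming request can be issued from Python. Below is a minimal sketch using `requests`, assuming the server from the previous step is listening on port 8080 and emits standard TGI server-sent events:

```python
import json

import requests

# Mirror the curl request above against the local TGI server.
resp = requests.post(
    "http://127.0.0.1:8080/generate_stream",
    json={"inputs": "What is Deep Learning?", "parameters": {"max_new_tokens": 20}},
    stream=True,
)
resp.raise_for_status()

# The endpoint streams server-sent events; payload lines start with "data:".
for line in resp.iter_lines():
    if line.startswith(b"data:"):
        event = json.loads(line[len(b"data:"):])
        print(event.get("token", {}).get("text", ""), end="", flush=True)
print()
```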

### Minimal Sample

*To try this out with the fms-native compiled model, please execute the following:*

#### Install

```bash
git clone --branch ibm_7b_instruct_lab_variant --single-branch https://github.com/JRosenkranz/fms-extras.git
(cd fms-extras && pip install -e .)
pip install transformers==4.35.0 sentencepiece numpy
```

#### Run Sample

##### batch_size=1 (compile + cudagraphs)

```bash
MODEL_PATH=/path/to/ibm-granite/granite-7b-instruct
python fms-extras/scripts/paged_speculative_inference.py \
    --variant=7b.ibm_instruct_lab \
    --model_path=$MODEL_PATH \
    --model_source=hf \
    --tokenizer=$MODEL_PATH \
    --speculator_path=ibm-granite/granite-7b-instruct-accelerator \
    --speculator_source=hf \
    --speculator_variant=1_4b \
    --top_k_tokens_per_head=4,3,2,2,2 \
    --compile \
    --compile_mode=reduce-overhead
```
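
The `--top_k_tokens_per_head` flag supplies one value per speculator head. Assuming Medusa-style tree drafting, where each head keeps that many candidate tokens, the product of the values bounds the number of candidate drafts; a back-of-the-envelope sketch (not taken from the script itself):

```python
import math

top_k_per_head = [4, 3, 2, 2, 2]  # one entry per speculator head
print(len(top_k_per_head))        # 5 heads -> 5-token drafts
print(math.prod(top_k_per_head))  # 4*3*2*2*2 = 96 candidate drafts
```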

##### batch_size=1 (compile)

```bash
MODEL_PATH=/path/to/ibm-granite/granite-7b-instruct
python fms-extras/scripts/paged_speculative_inference.py \
    --variant=7b.ibm_instruct_lab \
    --model_path=$MODEL_PATH \
    --model_source=hf \
    --tokenizer=$MODEL_PATH \
    --speculator_path=ibm-granite/granite-7b-instruct-accelerator \
    --speculator_source=hf \
    --speculator_variant=1_4b \
    --top_k_tokens_per_head=4,3,2,2,2 \
    --compile
```

##### batch_size=4 (compile)

```bash
MODEL_PATH=/path/to/ibm-granite/granite-7b-instruct
python fms-extras/scripts/paged_speculative_inference.py \
    --variant=7b.ibm_instruct_lab \
    --model_path=$MODEL_PATH \
    --model_source=hf \
    --tokenizer=$MODEL_PATH \
    --speculator_path=ibm-granite/granite-7b-instruct-accelerator \
    --speculator_source=hf \
    --speculator_variant=1_4b \
    --top_k_tokens_per_head=4,3,2,2,2 \
    --batch_input \
    --compile
```