---
title: "Assisted Generation: a new direction toward low-latency text generation"
thumbnail: /blog/assets/assisted-generation/thumbnail.png
authors:
- user: joaogante
---

# Assisted Generation: a new direction toward low-latency text generation

<!-- {blog_metadata} -->
<!-- {authors} -->

Large language models are all the rage these days, with many companies investing significant resources to scale them up and unlock new capabilities. However, as humans with ever-decreasing attention spans, we also dislike their slow response times. Latency is critical for a good user experience, and smaller models are often used despite their lower quality (e.g. in [code completion](https://ai.googleblog.com/2022/07/ml-enhanced-code-completion-improves.html)).

Why is text generation so slow? What’s preventing you from deploying low-latency large language models without going bankrupt? In this blog post, we will revisit the bottlenecks for autoregressive text generation and introduce a new decoding method to tackle the latency problem. You’ll see that by using our new method, assisted generation, you can reduce latency by up to 10x on commodity hardware!

## Understanding text generation latency

The core of modern text generation is straightforward to understand. Let’s look at the central piece, the ML model. Its input contains a text sequence, which includes the text generated so far, and potentially other model-specific components (for instance, Whisper also has an audio input). The model takes the input and runs a forward pass: the input is fed to the model and passed sequentially along its layers until the unnormalized log probabilities for the next token are predicted (also known as logits). A token may consist of an entire word, a sub-word, or even a single character, depending on the model. The [illustrated GPT-2](https://jalammar.github.io/illustrated-gpt2/) is a great reference if you’d like to dive deeper into this part of text generation.

<!-- [GIF 1 -- FWD PASS] -->
<video autoplay loop muted playsinline src="/blog/assets/assisted-generation/gif_1_1080p.mov"></video>

A model forward pass gets you the logits for the next token, which you can freely manipulate (e.g. set the probability of undesirable words or sequences to 0). The following step in text generation is to select the next token from these logits. Common strategies include picking the most likely token, known as greedy decoding, or sampling from this distribution, also called multinomial sampling. Chaining model forward passes with next token selection iteratively gets you text generation. This explanation is the tip of the iceberg when it comes to decoding methods; please refer to [our blog post on text generation](https://huggingface.co/blog/how-to-generate) for an in-depth exploration.

<!-- [GIF 2 -- TEXT GENERATION] -->
<video autoplay loop muted playsinline src="/blog/assets/assisted-generation/gif_2_1080p.mov"></video>

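To make the loop concrete, here is a minimal, cache-free sketch of greedy decoding written by hand. The checkpoint and prompt are purely illustrative, and `model.generate()` handles all of this (and much more) for you:

```python
# Minimal sketch of the generation loop: one forward pass per new token,
# followed by greedy next token selection. Checkpoint and prompt are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

input_ids = tokenizer("The quick brown", return_tensors="pt").input_ids
for _ in range(10):
    logits = model(input_ids).logits              # (batch, seq_len, vocab_size)
    next_token = logits[:, -1, :].argmax(dim=-1)  # greedy decoding: pick the most likely token
    input_ids = torch.cat([input_ids, next_token[:, None]], dim=-1)
print(tokenizer.decode(input_ids[0]))
```
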
From the description above, the latency bottleneck in text generation is clear: running a model forward pass for large models is slow, and you may need to do hundreds of them in a sequence. But let’s dive deeper: why are forward passes slow? Forward passes are typically dominated by matrix multiplications and, after a quick visit to the [corresponding Wikipedia section](https://en.wikipedia.org/wiki/Matrix_multiplication_algorithm#Communication-avoiding_and_distributed_algorithms), you can tell that memory bandwidth is the limitation in this operation (e.g. from the GPU RAM to the GPU compute cores). In other words, *the bottleneck in the forward pass comes from loading the model layer weights into the computation cores of your device, not from performing the computations themselves*.

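A back-of-the-envelope estimate shows why this matters. The figures below are assumptions for illustration (a 7B-parameter model in fp16 and roughly RTX 3090-class memory bandwidth), not measurements:

```python
# If every forward pass has to stream all model weights from GPU memory, memory
# bandwidth sets a hard floor on per-token latency. All figures are illustrative.
params = 7e9                   # 7B-parameter model
bytes_per_param = 2            # fp16 weights
bandwidth_bytes_per_s = 936e9  # ~936 GB/s, roughly an RTX 3090
min_seconds_per_token = params * bytes_per_param / bandwidth_bytes_per_s
print(f"Lower bound: {min_seconds_per_token * 1000:.1f} ms per token")  # ~15 ms
```
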
At the moment, you have three main avenues you can explore to get the most out of text generation, all tackling the performance of the model forward pass. First, you have the hardware-specific model optimizations. For instance, your device may be compatible with [Flash Attention](https://github.com/HazyResearch/flash-attention), which speeds up the attention layer through a reorder of the operations, or [INT8 quantization](https://huggingface.co/blog/hf-bitsandbytes-integration), which reduces the size of the model weights.

Second, when you know you’ll get concurrent text generation requests, you can batch the inputs and massively increase the throughput with a small latency penalty. The model layer weights loaded into the device are now used on several input rows in parallel, which means that you’ll get more tokens out for approximately the same memory bandwidth burden. The catch with batching is that you need additional device memory (or to offload the memory somewhere) – at the end of this spectrum, you can see projects like [FlexGen](https://github.com/FMInference/FlexGen) which optimize throughput at the expense of latency.

```python
# Example showcasing the impact of batched generation. Measurement device: RTX3090
from transformers import AutoModelForCausalLM, AutoTokenizer
import time

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2").to("cuda")
inputs = tokenizer(["Hello world"], return_tensors="pt").to("cuda")

def print_tokens_per_second(batch_size):
    new_tokens = 100
    cumulative_time = 0

    # warmup
    model.generate(
        **inputs, do_sample=True, max_new_tokens=new_tokens, num_return_sequences=batch_size
    )

    for _ in range(10):
        start = time.time()
        model.generate(
            **inputs, do_sample=True, max_new_tokens=new_tokens, num_return_sequences=batch_size
        )
        cumulative_time += time.time() - start
    print(f"Tokens per second: {new_tokens * batch_size * 10 / cumulative_time:.1f}")

print_tokens_per_second(1)   # Tokens per second: 418.3
print_tokens_per_second(64)  # Tokens per second: 16266.2 (~39x more tokens per second)
```

Finally, if you have multiple devices available to you, you can distribute the workload using [Tensor Parallelism](https://huggingface.co/docs/transformers/main/en/perf_train_gpu_many#tensor-parallelism) and obtain lower latency. With Tensor Parallelism, you split the memory bandwidth burden across multiple devices, but you now have to consider inter-device communication bottlenecks in addition to the monetary cost of running multiple devices. The benefits depend largely on the model size: models that easily fit on a single consumer device see very limited benefits. Taking the results from this [DeepSpeed blog post](https://www.microsoft.com/en-us/research/blog/deepspeed-accelerating-large-scale-model-inference-and-training-via-system-optimizations-and-compression/), you see that you can spread a 17B parameter model across 4 GPUs to reduce the latency by 1.5x (Figure 7).

These three types of improvements can be used in tandem, resulting in [high throughput solutions](https://github.com/huggingface/text-generation-inference). However, after applying hardware-specific optimizations, there are limited options to reduce latency – and the existing options are expensive. Let’s fix that!

## Language decoder forward pass, revisited

You’ve read above that each model forward pass yields the logits for the next token, but that’s actually an incomplete description. During text generation, the typical iteration consists of the model receiving as input the latest generated token, plus cached internal computations for all other previous inputs, and returning the next token logits. Caching is used to avoid redundant computations, resulting in faster forward passes, but it’s not mandatory (and can be used partially). When caching is disabled, the input contains the entire sequence of tokens generated so far and the output contains the logits corresponding to the next token for *all positions* in the sequence! The logits at position N correspond to the distribution for the next token if the input consisted of the first N tokens, ignoring all subsequent tokens in the sequence. In the particular case of greedy decoding, if you pass the generated sequence as input and apply the argmax operator to the resulting logits, you will obtain the generated sequence back.

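As a quick aside, here is a rough sketch of what a cached iteration looks like. The checkpoint is illustrative, and the actual internals of 🤗 Transformers' `generate()` are more involved:

```python
# Sketch of cached decoding: the first call processes the whole prompt; later calls
# only feed the newest token plus `past_key_values`, which is what makes cached
# forward passes cheaper. Checkpoint and prompt are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

input_ids = tok(["The"], return_tensors="pt").input_ids
outputs = model(input_ids, use_cache=True)
next_token = outputs.logits[:, -1, :].argmax(dim=-1, keepdim=True)

# Next iteration: pass only the new token, alongside the cache.
outputs = model(next_token, past_key_values=outputs.past_key_values, use_cache=True)
```

The snippet below takes the opposite route, a single forward pass over the full generated sequence, to confirm the property described above.
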
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

inputs = tok(["The"], return_tensors="pt")
generated = model.generate(**inputs, do_sample=False, max_new_tokens=10)
forward_confirmation = model(generated).logits.argmax(-1)

# We exclude the opposing tips from each sequence: the forward pass returns
# the logits for the next token, so it is shifted by one position.
print(generated[0, 1:].tolist() == forward_confirmation[0, :-1].tolist())  # True
```

This means that you can use a model forward pass for a different purpose: in addition to feeding some tokens to predict the next one, you can also pass a sequence to the model and double-check whether the model would generate that same sequence (or part of it).

<!-- [GIF 3 -- FWD CONFIRMATION] -->
<video autoplay loop muted playsinline src="/blog/assets/assisted-generation/gif_3_1080p.mov"></video>

Let’s consider for a second that you have access to a magical latency-free oracle model that generates the same sequence as your model, for any given input. For argument’s sake, it can’t be used directly and is limited to being an assistant to your generation procedure. Using the property described above, you could use this assistant model to get candidate output tokens followed by a forward pass with your model to confirm that they are indeed correct. In this utopian scenario, the latency of text generation would be reduced from `O(n)` to `O(1)`, with `n` being the number of generated tokens. For long generations, we're talking about several orders of magnitude.

Walking a step towards reality, let's assume the assistant model has lost its oracle properties. Now it’s a latency-free model that gets some of the candidate tokens wrong, according to your model. Due to the autoregressive nature of the task, as soon as the assistant gets a token wrong, all subsequent candidates must be invalidated. However, that does not prevent you from querying the assistant again, after correcting the wrong token with your model, and repeating this process iteratively. Even if the assistant gets a few tokens wrong, text generation would have an order of magnitude less latency than in its original form.

Obviously, there are no latency-free assistant models. Nevertheless, it is relatively easy to find a model that approximates some other model’s text generation outputs – smaller versions of the same architecture trained similarly often fit this property. Moreover, when the difference in model sizes becomes significant, the cost of using the smaller model as an assistant becomes an afterthought after factoring in the benefits of skipping a few forward passes! You now understand the core of _assisted generation_.

## Greedy decoding with assisted generation

Assisted generation is a balancing act. You want the assistant to quickly generate a candidate sequence while being as accurate as possible. If the assistant has poor quality, you get the cost of using the assistant model with little to no benefits. On the other hand, optimizing the quality of the candidate sequences may imply the use of slow assistants, resulting in a net slowdown. While we can't automate the selection of the assistant model for you, we’ve included an additional requirement and a heuristic to ensure the time spent with the assistant stays in check.

First, the requirement – the assistant must have the exact same tokenizer as your model. If this requirement were not in place, expensive token decoding and re-encoding steps would have to be added. Furthermore, these additional steps would have to happen on the CPU, which in turn may need slow inter-device data transfers. Fast usage of the assistant is critical for the benefits of assisted generation to show up.

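In practice, picking a smaller checkpoint from the same model family usually satisfies this requirement. A quick sanity check, using the Pythia checkpoints from the example further below purely as an illustration:

```python
# Illustrative check that a candidate assistant shares the main model's tokenizer.
from transformers import AutoTokenizer

main_tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-1.4b-deduped")
assistant_tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-160m-deduped")
assert main_tokenizer.get_vocab() == assistant_tokenizer.get_vocab()
```
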
Finally, the heuristic. By this point, you have probably noticed the similarities between the movie Inception and assisted generation – you are, after all, running text generation inside text generation. There will be one assistant model forward pass per candidate token, and we know that forward passes are expensive. While you can’t know in advance the number of tokens that the assistant model will get right, you can keep track of this information and use it to limit the number of candidate tokens requested to the assistant – some sections of the output are easier to anticipate than others.

Wrapping it all up, here’s our original implementation of the assisted generation loop ([code](https://github.com/huggingface/transformers/blob/849367ccf741d8c58aa88ccfe1d52d8636eaf2b7/src/transformers/generation/utils.py#L4064)), with a simplified sketch of the candidate validation steps after the list:
1. Use greedy decoding to generate a certain number of candidate tokens with the assistant model, producing `candidates`. The number of produced candidate tokens is initialized to `5` the first time assisted generation is called.
2. Using our model, do a forward pass with `candidates`, obtaining `logits`.
3. Use the token selection method (`.argmax()` for greedy search or `.multinomial()` for sampling) to get the `next_tokens` from `logits`.
4. Compare `next_tokens` to `candidates` and get the number of matching tokens. Remember that this comparison has to be done with left-to-right causality: after the first mismatch, all candidates are invalidated.
5. Use the number of matches to slice things up and discard variables related to unconfirmed candidate tokens. In essence, in `next_tokens`, keep the matching tokens plus the first divergent token (which our model generates from a valid candidate subsequence).
6. Adjust the number of candidate tokens to be produced in the next iteration – our original heuristic increases it by `2` if ALL tokens match and decreases it by `1` otherwise.

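Here is a minimal sketch of steps 4 to 6, assuming 1D tensors of token ids for readability. It is illustrative pseudologic, not the actual 🤗 Transformers implementation:

```python
# Simplified candidate validation (steps 4-6 above). `candidates` holds the assistant's
# proposed tokens; `next_tokens` holds our model's selections for each candidate position
# plus one extra token. Illustrative only; the real implementation also handles batching,
# caches, and stopping criteria.
import torch

def validate_candidates(candidates: torch.Tensor, next_tokens: torch.Tensor, num_candidates: int):
    # Left-to-right causality: only the prefix of matching tokens is valid.
    matches = (candidates == next_tokens[:-1]).int().cumprod(dim=0)
    n_matches = int(matches.sum())

    # Keep the matching tokens plus the first divergent token from our model.
    accepted_tokens = next_tokens[: n_matches + 1]

    # Heuristic: ask for more candidates when the assistant was fully correct, back off otherwise.
    if n_matches == len(candidates):
        num_candidates += 2
    else:
        num_candidates = max(1, num_candidates - 1)
    return accepted_tokens, num_candidates
```

For example, with `candidates = [5, 8, 3]` and `next_tokens = [5, 8, 7, 2]`, the first two candidates match, `[5, 8, 7]` is accepted, and the candidate length for the next iteration drops from `5` to `4`.
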
<!-- [GIF 4 -- ASSISTED GENERATION] -->
<video autoplay loop muted playsinline src="/blog/assets/assisted-generation/gif_4_1080p.mov"></video>

We’ve designed the API in 🤗 Transformers such that this process is hassle-free for you. All you need to do is pass the assistant model under the new `assistant_model` keyword argument and reap the latency gains! At the time of the release of this blog post, assisted generation is limited to a batch size of 1.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

prompt = "Alice and Bob"
checkpoint = "EleutherAI/pythia-1.4b-deduped"
assistant_checkpoint = "EleutherAI/pythia-160m-deduped"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
inputs = tokenizer(prompt, return_tensors="pt").to(device)

model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
assistant_model = AutoModelForCausalLM.from_pretrained(assistant_checkpoint).to(device)
outputs = model.generate(**inputs, assistant_model=assistant_model)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# ['Alice and Bob are sitting in a bar. Alice is drinking a beer and Bob is drinking a']
```

Is the additional internal complexity worth it? Let’s have a look at the latency numbers for the greedy decoding case (results for sampling are in the next section), considering a batch size of 1. These results were pulled directly out of 🤗 Transformers without any additional optimizations, so you should be able to reproduce them in your setup.

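If you'd like to measure it yourself, here is a rough timing template that reuses `model`, `assistant_model`, and `inputs` from the snippet above. Treat it as a starting point rather than a rigorous benchmark:

```python
# Rough timing comparison of greedy decoding with and without an assistant.
# Reuses `model`, `assistant_model`, and `inputs` from the previous snippet; numbers
# will vary with your hardware and prompt.
import time

def time_generate(**kwargs):
    start = time.time()
    model.generate(**inputs, max_new_tokens=64, **kwargs)
    return time.time() - start

time_generate()  # warmup
baseline = time_generate()
assisted = time_generate(assistant_model=assistant_model)
print(f"Baseline: {baseline:.2f}s | Assisted: {assisted:.2f}s")
```
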
<!-- [SPACE WITH GREEDY DECODING PERFORMANCE NUMBERS] -->
<script type="module" src="https://gradio.s3-us-west-2.amazonaws.com/3.23.0/gradio.js"></script>

<gradio-app src="https://huggingface.co/spaces/joaogante/assisted_generation_benchmarks"></gradio-app>

Glancing at the collected numbers, we see that assisted generation can deliver significant latency reductions in diverse settings, but it is not a silver bullet – you should benchmark it before applying it to your use case. We can conclude that assisted generation:
1. 🤏 Requires access to an assistant model that is at least an order of magnitude smaller than your model (the bigger the difference, the better);
2. 🚀 Gets up to 3x speedups in the presence of INT8 and up to 2x otherwise, when the model fits in the GPU memory;
3. 🤯 If you’re playing with models that do not fit in your GPU and are relying on memory offloading, you can see up to 10x speedups;
4. 📄 Shines in input-grounded tasks, like automatic speech recognition or summarization.

## Sample with assisted generation

Greedy decoding is suited for input-grounded tasks (automatic speech recognition, translation, summarization, ...) or factual knowledge-seeking. Open-ended tasks requiring high levels of creativity, such as most uses of a language model as a chatbot, should use sampling instead. Assisted generation is naturally designed for greedy decoding, but that doesn’t mean that you can’t use assisted generation with multinomial sampling!

Drawing samples from a probability distribution for the next token will cause our greedy assistant to fail more often, reducing its latency benefits. However, we can control how sharp the probability distribution for the next tokens is, using the temperature coefficient that’s present in most sampling-based applications. At one extreme, with temperatures close to 0, sampling will approximate greedy decoding, favoring the most likely token. At the other extreme, with the temperature set to values much larger than 1, sampling will be chaotic, drawing from a uniform distribution. Low temperatures are, therefore, more favorable to your assistant model, retaining most of the latency benefits from assisted generation, as we can see below.

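In practice, this only requires passing the usual sampling arguments to `generate()` alongside the assistant. A short sketch, reusing `model`, `assistant_model`, `inputs`, and `tokenizer` from the greedy example (the temperature value is illustrative):

```python
# Assisted generation combined with multinomial sampling. A low temperature keeps the
# next-token distribution sharp, so the greedy assistant still anticipates most tokens.
# Reuses objects from the earlier snippet; the temperature value is illustrative.
outputs = model.generate(
    **inputs,
    assistant_model=assistant_model,
    do_sample=True,
    temperature=0.5,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```
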
<!-- [TEMPERATURE RESULTS, SHOW THAT LATENCY INCREASES STEADILY WITH TEMP] -->
<div align="center">
<img src="/blog/assets/assisted-generation/temperature.png"/>
</div>

Why don’t you see it for yourself and get a feel for assisted generation?

<!-- [DEMO] -->
<script type="module" src="https://gradio.s3-us-west-2.amazonaws.com/3.23.0/gradio.js"></script>

<gradio-app src="https://huggingface.co/spaces/joaogante/assisted_generation_demo"></gradio-app>

## Future directions

Assisted generation shows that modern text generation strategies are ripe for optimization. Understanding that it is currently a memory-bound problem, not a compute-bound problem, allows us to apply simple heuristics to get the most out of the available memory bandwidth, alleviating the bottleneck. We believe that further refinement of the use of assistant models will get us even bigger latency reductions - for instance, we may be able to skip a few more forward passes if we request the assistant to generate several candidate continuations. Naturally, releasing high-quality small models to be used as assistants will be critical to realizing and amplifying the benefits.

Assisted generation was initially released under our 🤗 Transformers library, to be used with the `.generate()` function, and we expect to offer it throughout the Hugging Face universe. Its implementation is also completely open-source, so if you’re working on text generation and not using our tools, feel free to use it as a reference.

Finally, assisted generation resurfaces a crucial question in text generation. The field has been evolving under the constraint that all new tokens are the result of a fixed amount of compute, for a given model: one token per homogeneous forward pass, in pure autoregressive fashion. This blog post reinforces the idea that it shouldn’t be the case: large subsections of the generated output can be equally well generated by models that are a fraction of the size. For that, we’ll need new model architectures and decoding methods – we’re excited to see what the future holds!

## Acknowledgements

I'd like to thank Sylvain Gugger, Nicolas Patry, and Lewis Tunstall for sharing many valuable suggestions to improve this blog post. Finally, kudos to Chunte Lee for designing the gorgeous cover you can see on our web page.

<!-- [ADD CITATION INFO] -->