TPI-LLM: Serving 70B-scale LLMs Efficiently on Low-resource Edge Devices
Abstract
Large model inference is shifting from the cloud to the edge due to concerns about the privacy of user interaction data. However, edge devices often struggle with limited computing power, memory, and bandwidth, requiring collaboration across multiple devices to run and accelerate LLM inference. Pipeline parallelism, the mainstream solution, is inefficient for single-user scenarios, while tensor parallelism suffers from frequent communication. In this paper, we argue that tensor parallelism can be more effective than pipeline parallelism on low-resource devices, and we present a compute- and memory-efficient tensor parallel inference system, named TPI-LLM, to serve 70B-scale models. TPI-LLM keeps sensitive raw data local on users' devices and introduces a sliding window memory scheduler that dynamically manages layer weights during inference, overlapping disk I/O latency with computation and communication. This allows larger models to run smoothly on memory-limited devices. We analyze the communication bottleneck and find that link latency, not bandwidth, is the main issue, so we implement a star-based allreduce algorithm. Through extensive experiments on both emulated and real testbeds, TPI-LLM achieves over 80% lower time-to-first-token and token latency than Accelerate, and over 90% lower than Transformers and Galaxy, while cutting the peak memory footprint of Llama 2-70B by 90%, requiring only 3.1 GB of memory to serve 70B-scale models.
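To make the two mechanisms named in the abstract more concrete, below are minimal Python/PyTorch sketches. They are not the authors' implementation: the names `SlidingWindowScheduler`, `star_allreduce`, and all parameters are assumptions for illustration only. The first sketch keeps only a small window of per-layer weight files resident in RAM and prefetches the next layer on a background thread, so disk I/O overlaps with computation and communication.

```python
import threading
from collections import OrderedDict

import torch


class SlidingWindowScheduler:
    """Hypothetical sketch: keep at most `window_size` layers' weights in RAM,
    prefetch upcoming layers from disk in the background, evict old ones."""

    def __init__(self, layer_paths, window_size=4):
        self.layer_paths = layer_paths   # one weight file per transformer layer
        self.window_size = window_size
        self.cache = OrderedDict()       # layer_idx -> weights resident in RAM
        self.inflight = {}               # layer_idx -> running prefetch thread
        self.lock = threading.Lock()

    def _load(self, idx):
        weights = torch.load(self.layer_paths[idx], map_location="cpu")
        with self.lock:
            self.cache[idx] = weights
            self.inflight.pop(idx, None)
            while len(self.cache) > self.window_size:
                self.cache.popitem(last=False)   # slide the window forward

    def prefetch(self, idx):
        # Start loading layer `idx` so disk I/O overlaps with compute/comm.
        with self.lock:
            if idx in self.cache or idx in self.inflight:
                return
            t = threading.Thread(target=self._load, args=(idx,), daemon=True)
            self.inflight[idx] = t
        t.start()

    def get(self, idx):
        # Return layer `idx`, blocking only if its disk read is still running.
        self.prefetch(idx)
        with self.lock:
            t = self.inflight.get(idx)
        if t is not None:
            t.join()
        return self.cache[idx]


# Usage sketch: compute layer i while layer i + 1 streams from disk.
# scheduler = SlidingWindowScheduler(layer_paths, window_size=2)
# scheduler.prefetch(0)
# for i in range(num_layers):
#     if i + 1 < num_layers:
#         scheduler.prefetch(i + 1)
#     weights = scheduler.get(i)
#     hidden = apply_layer(weights, hidden)   # model-specific compute
```

The second sketch shows the idea behind a star-based allreduce: every worker sends its partial result to a hub node, which sums the parts and broadcasts the total back, so a reduction costs only two communication hops regardless of the number of devices. This is a reasonable choice when link latency, rather than bandwidth, dominates. It assumes a `torch.distributed` process group (e.g., the gloo backend) has already been initialized.

```python
import torch
import torch.distributed as dist


def star_allreduce(tensor, hub_rank=0):
    """Hypothetical star-based allreduce: workers send partial results to a
    hub, which sums them and broadcasts the total back (two hops total)."""
    rank, world_size = dist.get_rank(), dist.get_world_size()
    if rank == hub_rank:
        parts = [torch.empty_like(tensor) for _ in range(world_size)]
        dist.gather(tensor, gather_list=parts, dst=hub_rank)
        result = torch.stack(parts).sum(dim=0)
    else:
        dist.gather(tensor, dst=hub_rank)
        result = torch.empty_like(tensor)
    dist.broadcast(result, src=hub_rank)
    return result
```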
Community
Nice work!
This is an automated message from the Librarian Bot. The following papers, similar to this one, were recommended by the Semantic Scholar API:
- NanoFlow: Towards Optimal Large Language Model Serving Throughput (2024)
- Training Ultra Long Context Language Model with Fully Pipelined Distributed Transformer (2024)
- Kraken: Inherently Parallel Transformers For Efficient Multi-Device Inference (2024)
- InstInfer: In-Storage Attention Offloading for Cost-Effective Long-Context LLM Inference (2024)
- Mnemosyne: Parallelization Strategies for Efficiently Serving Multi-Million Context Length LLM Inference Requests Without Approximations (2024)
Hey, this paper was a great read. We wrote a summary blog post about this paper and a few others, including:
- TPI-LLM
- Differential Transformer
- ARIA
You can find it here. Please give it a read :)
Thank you for the great summary; it's clear and concise. We are continuing to develop TPI-LLM, and our current results show that token latency can be reduced to under one second. We will let you know once we open-source the new work.