Large language models (LLMs) have recently attracted surging interest due to their outstanding capabilities across various domains. However, efficient LLM inference is challenging because autoregressive decoding generates only one token at a time. Although prior work applies pruning or quantization to speed up LLM inference, these methods typically require fine-tuning the LLM, incurring significant time and economic costs. Meanwhile, speculative decoding has been proposed, which uses small speculative models (SSMs) to accelerate LLM inference. However, the low acceptance rate of the SSM and the high verification cost of the LLM prohibit further performance improvement. In this paper, we propose Minions, an LLM inference system that accelerates inference with collective and adaptive speculative generation. Specifically, Minions introduces a majority-voting mechanism that leverages multiple SSMs to jointly speculate the outputs of the LLM, improving inference performance without introducing prohibitive computation costs for the LLM. To better trade off the number of tokens speculated by the SSMs against the verification cost of the LLM, Minions adopts an adaptive mechanism that dynamically determines the optimal speculation length, achieving better inference performance across different models, datasets, and hyper-parameters. In addition, Minions efficiently decouples SSM decoding from LLM verification and adopts a pipelined execution mechanism to further improve inference performance. Compared with state-of-the-art LLM inference systems, Minions achieves higher inference throughput and lower inference time.
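To make the collective speculation idea concrete, the following is a minimal sketch, not the paper's implementation: at each draft position the token that most SSMs agree on is selected, and the LLM then accepts the longest matching prefix of the draft (greedy verification). The function names, the token-level voting granularity, and the `llm_next_token` oracle are all illustrative assumptions.

```python
from collections import Counter

def majority_vote_speculate(ssm_proposals, spec_len):
    """Combine drafts from multiple SSMs by per-position majority vote.

    ssm_proposals: list of token sequences, one per SSM, each at least
    spec_len tokens long (hypothetical interface, for illustration only).
    """
    draft = []
    for pos in range(spec_len):
        votes = Counter(proposal[pos] for proposal in ssm_proposals)
        draft.append(votes.most_common(1)[0][0])  # most-agreed token wins
    return draft

def verify(draft, llm_next_token):
    """Accept the longest draft prefix that matches the LLM's own outputs.

    llm_next_token: stand-in for one LLM decoding step given the tokens
    accepted so far (greedy verification, an assumption of this sketch).
    """
    accepted = []
    for token in draft:
        if token != llm_next_token(accepted):
            break  # first mismatch ends the accepted prefix
        accepted.append(token)
    return accepted
```

Under this sketch, a longer speculation length yields more tokens per LLM verification step only while the SSMs keep agreeing with the LLM; past the first mismatch the extra draft tokens are wasted work, which is the trade-off the adaptive speculation-length mechanism targets.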