Accelerating the inference of large language models (LLMs) has been a critical challenge in generative AI. Speculative decoding (SD) substantially improves LLM inference efficiency, but its utility is limited by a fundamental constraint: the draft and target models must share the same vocabulary, which restricts the pool of available draft models and often necessitates training a new model from scratch. Inspired by Dynamic Time Warping (DTW), a classic algorithm for aligning time series, we propose TokenTiming, an algorithm for universal speculative decoding. It re-encodes the draft token sequence to obtain a corresponding target token sequence, then uses DTW to build a mapping between the two, over which the probability distributions are transferred for speculative sampling. As a result, our method accommodates mismatched vocabularies and works with any off-the-shelf models, without retraining or modification. We conduct comprehensive experiments on various tasks, demonstrating a 1.57x speedup. This work enables universal draft model selection, making SD a more versatile and practical tool for LLM acceleration.
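To make the alignment step concrete, below is a minimal Python sketch of the DTW idea, not the paper's implementation: it assumes the target sequence has already been obtained by re-encoding the draft text with the target tokenizer, and it uses a simple 0/1 string-mismatch cost; the names dtw_align, draft_tokens, and target_tokens are illustrative. The warping path it returns is the kind of many-to-one mapping over which draft probability mass can be transferred for speculative sampling.

```python
# Illustrative sketch only: DTW alignment of two token sequences with a
# 0/1 string-mismatch cost. All names here are hypothetical, not the
# paper's API; in TokenTiming the target sequence comes from re-encoding
# the draft text with the target model's tokenizer.
from typing import List, Tuple


def dtw_align(draft_tokens: List[str],
              target_tokens: List[str]) -> List[Tuple[int, int]]:
    """Align two token sequences with DTW; return the warping path
    as (draft_index, target_index) pairs."""
    n, m = len(draft_tokens), len(target_tokens)
    INF = float("inf")
    # dp[i][j]: minimal accumulated cost of aligning the first i draft
    # tokens with the first j target tokens.
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0.0 if draft_tokens[i - 1] == target_tokens[j - 1] else 1.0
            dp[i][j] = cost + min(dp[i - 1][j - 1],  # advance both sequences
                                  dp[i - 1][j],      # advance draft only
                                  dp[i][j - 1])      # advance target only
    # Backtrack from (n, m) to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        best = min(dp[i - 1][j - 1], dp[i - 1][j], dp[i][j - 1])
        if dp[i - 1][j - 1] == best:   # prefer the diagonal step on ties
            i, j = i - 1, j - 1
        elif dp[i - 1][j] == best:
            i -= 1
        else:
            j -= 1
    return path[::-1]


# Toy example: the two tokenizers segment the same text differently,
# but DTW still pairs each draft token with its target counterpart(s).
draft = ["token", "ization", " is", " fun"]
target = ["tok", "en", "ization", " is", " fun"]
print(dtw_align(draft, target))
# -> [(0, 0), (0, 1), (1, 2), (2, 3), (3, 4)]
```

In this toy run, the single draft token "token" maps to the two target tokens "tok" and "en", while the remaining tokens align one-to-one; such a path is exactly the mismatched-vocabulary correspondence that speculative sampling needs.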