Large Language Model (LLM)-based generative recommendation has achieved notable success, yet its practical deployment is costly, particularly due to the excessive inference latency caused by autoregressive decoding. For lossless LLM decoding acceleration, Speculative Decoding (SD) has emerged as a promising solution. However, applying SD to generative recommendation presents unique challenges due to the requirement of generating top-K items (i.e., K distinct token sequences) as a recommendation list via beam search. This leads to more stringent verification in SD, where all the top-K sequences from the target LLM must be successfully drafted by the draft model at each decoding step. To alleviate this, we consider 1) boosting top-K sequence alignment between the draft model and the target LLM, and 2) relaxing the verification strategy to reduce trivial LLM calls. To this end, we propose an alignment framework named AtSpeed, which presents the AtSpeed-S optimization objective for top-K alignment under strict top-K verification. Moreover, we introduce a relaxed sampling verification strategy that allows high-probability non-top-K drafted sequences to be accepted, significantly reducing LLM calls. Correspondingly, we propose AtSpeed-R for top-K alignment under this relaxed sampling verification. Empirical results on two real-world datasets demonstrate that AtSpeed significantly accelerates LLM-based generative recommendation, e.g., nearly 2x speedup under strict top-K verification and up to 2.5x speedup under relaxed sampling verification. The code and datasets will be released in the near future.
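The contrast between the two verification regimes described above can be sketched in a few lines. The snippet below is an illustrative toy example, not the paper's implementation: the sequence identifiers, probability values, and the fixed acceptance threshold in `relaxed_verify` are all assumptions made for demonstration.

```python
# Toy sketch (assumed, not the authors' code): strict vs. relaxed
# verification of a drafted top-K candidate set in speculative decoding.

def strict_topk_verify(draft_topk, target_topk):
    """Strict top-K verification: the draft step is accepted only if
    every one of the target LLM's top-K sequences was drafted."""
    return set(target_topk).issubset(set(draft_topk))

def relaxed_verify(draft_topk, target_probs, threshold=0.05):
    """Simplified relaxed verification: accept each drafted sequence
    whose target-model probability clears a threshold, even if it is
    not among the target's exact top-K."""
    return [s for s in draft_topk if target_probs.get(s, 0.0) >= threshold]

# Hypothetical example with K = 3 candidate token sequences.
draft = ["seq_a", "seq_b", "seq_d"]
target_topk = ["seq_a", "seq_b", "seq_c"]
target_probs = {"seq_a": 0.4, "seq_b": 0.3, "seq_c": 0.1, "seq_d": 0.08}

print(strict_topk_verify(draft, target_topk))  # False: "seq_c" was not drafted
print(relaxed_verify(draft, target_probs))     # all three drafts clear 0.05
```

Under strict verification, missing even one target top-K sequence forces a fallback to the target LLM, which is why the paper pursues tighter draft-target alignment; the relaxed variant trades exactness of the top-K list for fewer such fallbacks.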