Speculative decoding is commonly used to reduce the inference latency of large language models. Its effectiveness depends heavily on the speculation lookahead (SL): the number of tokens the draft model generates at each iteration. In this work we show that the common practice of using the same SL for all iterations (static SL) is suboptimal. We introduce DISCO (DynamIc SpeCulation lookahead Optimization), a novel method for dynamically selecting the SL. Our experiments on four datasets show that DISCO achieves an average speedup of 10% over the best static-SL baseline, while generating the exact same text.
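The mechanism the abstract refers to can be illustrated with a toy greedy speculative-decoding loop in which the speculation lookahead `sl` controls how many draft tokens are proposed and verified per iteration. The two "models" below are hypothetical next-token functions invented purely for illustration (they stand in for a real draft/target LM pair), and DISCO's dynamic SL selection itself is not implemented here; this is only a sketch of the static-SL setting it improves on.

```python
# A minimal sketch of greedy speculative decoding with a fixed
# speculation lookahead (SL). The two "models" are toy next-token
# functions invented for illustration, not a real LM pair.

def draft_model(ctx):
    # Hypothetical cheap draft model: next token = last token + 1.
    return ctx[-1] + 1

def target_model(ctx):
    # Hypothetical target model: agrees with the draft except that it
    # skips multiples of 5, so draft proposals are periodically rejected.
    nxt = ctx[-1] + 1
    return nxt + 1 if nxt % 5 == 0 else nxt

def speculative_decode(prompt, num_tokens, sl):
    """Generate `num_tokens` tokens. Each iteration the draft proposes
    `sl` tokens; the target keeps the longest agreeing prefix and, on
    the first mismatch, substitutes its own token. The output is
    therefore identical to decoding with the target alone, whatever
    value `sl` takes."""
    tokens = list(prompt)
    iterations = 0  # each iteration ~ one (batched) target forward pass
    while len(tokens) - len(prompt) < num_tokens:
        iterations += 1
        # 1. Draft proposes `sl` tokens autoregressively.
        draft = [draft_model(tokens)]
        for _ in range(sl - 1):
            draft.append(draft_model(tokens + draft))
        # 2. Target verifies the proposals left to right.
        accepted = []
        for tok in draft:
            t = target_model(tokens + accepted)
            if t == tok:
                accepted.append(tok)
            else:
                accepted.append(t)  # first mismatch: take the target's token
                break
        tokens.extend(accepted)
    return tokens[len(prompt):len(prompt) + num_tokens], iterations
```

Running this with different `sl` values produces the same output text but different iteration counts: too small an SL wastes target passes on easy stretches, too large an SL wastes draft work past the first rejection. That per-iteration trade-off is what a dynamically chosen SL can exploit.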