We propose TPP-SD, a novel approach that accelerates sampling from Transformer temporal point process (TPP) models by adapting speculative decoding (SD) techniques from language models. By identifying the structural similarity between thinning algorithms for TPPs and speculative decoding for language models, we develop an efficient sampling framework in which a smaller draft model generates multiple candidate events that the larger target model then verifies in parallel. TPP-SD maintains the same output distribution as autoregressive sampling while achieving significant acceleration. Experiments on both synthetic and real datasets demonstrate that our approach produces samples from the same distribution as standard methods, but with a 2-6$\times$ speedup. Our ablation studies analyze the impact of hyperparameters such as draft length and draft model size on sampling efficiency. TPP-SD bridges the gap between powerful Transformer TPP models and the practical need for rapid sequence sampling.
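To make the draft-then-verify idea concrete, the following is a minimal sketch of one speculative round over inter-event times. The function names (`draft_density`, `target_density`, `draft_sampler`) and the sequential accept loop are illustrative assumptions, not the paper's exact verification rule; in the actual method the target model scores all drafted candidates in a single parallel forward pass.

```python
import numpy as np


def speculative_tpp_round(draft_density, target_density, draft_sampler,
                          history, gamma=4, rng=None):
    """One speculative round (sketch): the draft model proposes up to `gamma`
    inter-event times, each accepted against the target density with a
    thinning-style test min(1, p/q), mirroring LM speculative decoding.

    draft_density(tau, history)  -> q(tau | history)  (draft model, assumed API)
    target_density(tau, history) -> p(tau | history)  (target model, assumed API)
    draft_sampler(history)       -> one inter-event time drawn from q
    """
    rng = rng or np.random.default_rng()
    accepted = []
    h = list(history)
    for _ in range(gamma):
        tau = draft_sampler(h)                 # draft proposes the next gap
        q = draft_density(tau, h)
        p = target_density(tau, h)
        if rng.uniform() < min(1.0, p / q):    # accept: candidate kept as-is
            accepted.append(tau)
            h.append(tau)
        else:
            break                              # reject: end the round; the target
                                               # model resamples the next gap itself
    return accepted


# Toy usage with exponential inter-event densities (illustrative only).
if __name__ == "__main__":
    draft_rate, target_rate = 1.0, 1.2
    rng = np.random.default_rng(0)
    out = speculative_tpp_round(
        draft_density=lambda tau, h: draft_rate * np.exp(-draft_rate * tau),
        target_density=lambda tau, h: target_rate * np.exp(-target_rate * tau),
        draft_sampler=lambda h: rng.exponential(1.0 / draft_rate),
        history=[], gamma=4, rng=rng,
    )
    print("accepted inter-event times:", out)
```

Under this sketch, every accepted candidate avoids one full autoregressive step of the target model, which is the source of the reported speedup.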