Generative Recommendation (GR), powered by Large Language Models (LLMs), represents a promising new paradigm for industrial recommender systems. However, its practical application is severely hindered by high inference latency, which makes it infeasible for high-throughput, real-time services and limits its overall business impact. While Speculative Decoding (SD) has been proposed to accelerate the autoregressive generation process, existing implementations introduce new bottlenecks: they typically rely on separate draft models and model-based verifiers, which demand additional training and add latency overhead. In this paper, we address these challenges with NEZHA, a novel architecture that achieves hyperspeed decoding for GR systems without sacrificing recommendation quality. Specifically, NEZHA integrates a nimble autoregressive draft head directly into the primary model, enabling efficient self-drafting. This design, combined with a specialized input prompt structure, preserves the integrity of sequence-to-sequence generation. Furthermore, to tackle the critical problem of hallucination, a major source of performance degradation, we introduce an efficient, model-free verifier based on a hash set. We demonstrate the effectiveness of NEZHA through extensive experiments on public datasets, and the system has been deployed on Taobao since October 2025, supporting billions in advertising revenue and serving hundreds of millions of daily active users.
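To make the model-free verifier concrete, the following is a minimal sketch of hash-set verification for drafted items. All names here (`VALID_ITEM_IDS`, `verify_drafts`) are hypothetical illustrations, not the paper's actual implementation: the idea is simply that drafted item identifiers are accepted up to the first one that is not a real catalog entry, using O(1) set membership instead of a learned verifier model.

```python
# Hypothetical sketch of hash-set-based verification for speculative drafts.
# A real deployment would hold the full catalog's item IDs in this set.
VALID_ITEM_IDS = {"item_101", "item_202", "item_303"}

def verify_drafts(drafted_items):
    """Model-free verifier: accept the longest prefix of drafted items that
    are all real catalog entries; stop at the first hallucinated item."""
    accepted = []
    for item in drafted_items:
        if item in VALID_ITEM_IDS:  # O(1) hash-set lookup, no model call
            accepted.append(item)
        else:
            break  # hallucinated item: reject it and everything after it
    return accepted

# "item_999" is not in the catalog, so only the prefix before it is kept.
print(verify_drafts(["item_101", "item_202", "item_999", "item_303"]))
```

Because verification is a pure membership test, it adds negligible latency compared with running a second model over each drafted candidate.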