Traditional Deep Learning Recommendation Models (DLRMs) face growing bottlenecks in performance and efficiency, often struggling with generalization and long-sequence modeling. Inspired by the scaling success of Large Language Models (LLMs), we propose Generative Ranking for Ads at Baidu (GRAB), an end-to-end generative framework for Click-Through Rate (CTR) prediction. GRAB integrates a novel Causal Action-aware Multi-channel Attention (CamA) mechanism to effectively capture temporal dynamics and action-specific signals within user behavior sequences. Full-scale online deployment demonstrates that GRAB significantly outperforms established DLRMs, delivering a 3.05% increase in revenue and a 3.49% rise in CTR. Furthermore, the model exhibits desirable scaling behavior: its performance improves monotonically and approximately linearly as longer interaction sequences are used.
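The CamA mechanism named above can be illustrated with a minimal sketch. Note this is an assumed form, not the paper's exact parameterization: each action type (e.g. impression vs. click) is given its own attention "channel" with separate projections, and a causal mask prevents any position from attending to future events in the behavior sequence.

```python
import numpy as np

def causal_action_attention(x, actions, n_channels, rng=None):
    """Hypothetical sketch of Causal Action-aware Multi-channel Attention.

    x:       (T, d) time-ordered user-behavior embeddings
    actions: (T,) integer action type per event (e.g. 0=impression, 1=click)
    Each action type is handled by its own attention channel; a lower-
    triangular mask enforces causality (no attending to future events).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    T, d = x.shape
    # Per-channel query/key/value projections (assumed parameterization)
    Wq = rng.standard_normal((n_channels, d, d)) / np.sqrt(d)
    Wk = rng.standard_normal((n_channels, d, d)) / np.sqrt(d)
    Wv = rng.standard_normal((n_channels, d, d)) / np.sqrt(d)

    causal = np.tril(np.ones((T, T), dtype=bool))  # no peeking at the future
    out = np.zeros_like(x)
    for c in range(n_channels):
        q, k, v = x @ Wq[c], x @ Wk[c], x @ Wv[c]
        scores = (q @ k.T) / np.sqrt(d)
        # Channel c attends only to past events whose action type is c
        mask = causal & (actions[None, :] == c)
        scores = np.where(mask, scores, -1e9)
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w = w / np.clip(w.sum(axis=-1, keepdims=True), 1e-9, None)
        # Zero out rows that have no valid key for this channel
        out += np.where(mask.any(axis=-1, keepdims=True), w @ v, 0.0)
    return out
```

The per-channel masking is one plausible way to realize "action-aware" attention; the production model may instead mix action embeddings into the keys or use learned channel gating.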