Efficiently scaling industrial Click-Through Rate (CTR) prediction models has recently attracted significant research attention. Existing approaches typically employ early aggregation of user behaviors to maintain efficiency. However, such non-unified or partially unified modeling creates an information bottleneck: it discards the fine-grained, token-level signals essential for unlocking scaling gains. In this work, we revisit the fundamental distinctions between CTR prediction and Large Language Models (LLMs), identifying two critical properties: the asymmetry in information density between behavioral and non-behavioral features, and the modality-specific priors of content-rich signals. Accordingly, we propose the Efficiently Scalable Transformer (EST), which achieves fully unified modeling by processing all raw inputs in a single sequence without lossy aggregation. EST integrates two modules: Lightweight Cross-Attention (LCA), which prunes redundant self-interactions to focus on high-impact cross-feature dependencies, and Content Sparse Attention (CSA), which utilizes content similarity to dynamically select high-signal behaviors. Extensive experiments show that EST exhibits a stable and efficient power-law scaling relationship, enabling predictable performance gains with model scale. Deployed on Taobao's display advertising platform, EST significantly outperforms production baselines, delivering a 3.27\% increase in RPM (Revenue Per Mille) and a 1.22\% CTR lift, establishing a practical pathway for scalable industrial CTR prediction models.
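The content-similarity selection underlying CSA can be sketched as follows. This is a minimal illustration only: the function name, the cosine-similarity metric, and the top-k selection rule are assumptions for exposition, not the paper's exact formulation of CSA.

```python
import numpy as np

def content_sparse_select(behaviors, target, k):
    """Select the k behavior tokens most similar to the target content.

    A hedged sketch of content-similarity-based sparse selection; the
    actual CSA scoring and its integration with attention are not
    specified here.

    behaviors: (N, d) array of behavior token embeddings
    target:    (d,)   target-item content embedding
    k:         number of high-signal behaviors to keep
    """
    # Cosine similarity between each behavior token and the target content.
    b = behaviors / (np.linalg.norm(behaviors, axis=1, keepdims=True) + 1e-8)
    t = target / (np.linalg.norm(target) + 1e-8)
    scores = b @ t
    # Keep only the k most content-similar behaviors (sparse selection).
    top_k = np.argsort(-scores)[:k]
    return behaviors[top_k], top_k

# Illustrative usage: with one-hot behaviors, the behavior aligned with the
# target direction is selected first.
behaviors = np.eye(4)
target = np.array([1.0, 0.0, 0.0, 0.0])
selected, idx = content_sparse_select(behaviors, target, k=2)
```

In a full model, the selected subset would then serve as the key/value tokens for an attention layer over the user's behavior sequence, reducing cost from the full sequence length to k.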