In recommendation systems, scaling up feature-interaction modules (e.g., Wukong, RankMixer) or user-behavior sequence modules (e.g., LONGER) has achieved notable success. However, these efforts typically proceed on separate tracks, which not only hinders bidirectional information exchange but also prevents unified optimization and scaling. In this paper, we propose OneTrans, a unified Transformer backbone that simultaneously performs user-behavior sequence modeling and feature interaction. OneTrans employs a unified tokenizer to convert both sequential and non-sequential attributes into a single token sequence. The stacked OneTrans blocks share parameters across similar sequential tokens while assigning token-specific parameters to non-sequential tokens. Through causal attention and cross-request KV caching, OneTrans enables precomputation and caching of intermediate representations, significantly reducing computational costs during both training and inference. Experimental results on industrial-scale datasets demonstrate that OneTrans scales efficiently with increasing parameters, consistently outperforms strong baselines, and yields a 5.68% lift in per-user GMV in online A/B tests.
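To make the parameterization concrete, the following is a minimal numpy sketch of the mixed parameter-sharing scheme described above: all sequential (behavior) tokens pass through one shared projection, while each non-sequential feature token gets its own token-specific projection before both are merged back into a single unified token sequence. All names (`onetrans_ffn`, `W_shared`, `W_specific`) and dimensions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8      # token embedding dimension (illustrative)
n_seq = 5  # number of user-behavior (sequential) tokens
n_ns = 3   # number of non-sequential feature tokens

# Unified tokenizer (conceptual): both attribute types become d-dim tokens
seq_tokens = rng.normal(size=(n_seq, d))  # e.g. embedded behavior sequence
ns_tokens = rng.normal(size=(n_ns, d))    # e.g. embedded user/item/context features

# One projection shared by all sequential tokens; a separate projection
# per non-sequential token (token-specific parameters)
W_shared = rng.normal(size=(d, d))
W_specific = rng.normal(size=(n_ns, d, d))

def onetrans_ffn(seq_tokens, ns_tokens):
    """Mixed-parameterization projection step of a OneTrans-style block (sketch)."""
    seq_out = seq_tokens @ W_shared                          # shared parameters
    ns_out = np.einsum("td,tdh->th", ns_tokens, W_specific)  # per-token parameters
    # Concatenate back into the single unified token sequence
    return np.concatenate([seq_out, ns_out], axis=0)

out = onetrans_ffn(seq_tokens, ns_tokens)
print(out.shape)  # (8, 8): n_seq + n_ns tokens of dimension d
```

The design choice this illustrates: sequential tokens are homogeneous (items from the same behavior log), so sharing one set of weights across them scales sequence length without growing parameters, whereas heterogeneous non-sequential features benefit from dedicated capacity.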
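The cross-request caching idea can also be sketched: under causal attention, the keys and values of a user's stable behavior-sequence prefix do not depend on later tokens, so they can be computed once and reused across requests, with only the new tokens needing fresh computation. This is a single-head numpy sketch under assumed shapes; the helper names (`causal_attention`, `kv_cache`) are hypothetical, not the paper's API.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8  # head dimension (illustrative)

def causal_attention(q, k, v):
    """Single-head causal attention; queries may start mid-sequence."""
    scores = q @ k.T / np.sqrt(d)
    n, m = scores.shape
    # Query i (global position m - n + i) attends only to keys 0..m-n+i
    mask = np.tril(np.ones((n, m), dtype=bool), k=m - n)
    scores = np.where(mask, scores, -np.inf)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

# Request 1: precompute and cache K/V for the stable behavior-sequence prefix
prefix = rng.normal(size=(5, d))
kv_cache = (prefix @ Wk, prefix @ Wv)

# Request 2: only the new tokens (e.g. fresh candidates) need new K/V
new_tokens = rng.normal(size=(2, d))
k = np.vstack([kv_cache[0], new_tokens @ Wk])
v = np.vstack([kv_cache[1], new_tokens @ Wv])
out = causal_attention(new_tokens @ Wq, k, v)

# Sanity check: recomputing attention over the full sequence from scratch
# yields the same result for the new positions
full = np.vstack([prefix, new_tokens])
ref = causal_attention(full @ Wq, full @ Wk, full @ Wv)[-2:]
assert np.allclose(out, ref)
```

The cached path processes only the 2 new tokens instead of all 7, which is the source of the training- and inference-time savings the abstract attributes to causal attention plus cross-request KV caching.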