Click-through rate (CTR) prediction is crucial for personalized online services. Sample-level retrieval-based models, such as RIM, have demonstrated remarkable performance. However, their retrieval process incurs inference inefficiency and high resource consumption, hindering their practical deployment in industrial settings. To address this, we propose a universal plug-and-play \underline{r}etrieval-\underline{o}riented \underline{k}nowledge (\textbf{\name}) framework that bypasses the real retrieval process. The framework features a knowledge base that preserves and imitates the retrieved-and-aggregated representations through a decomposition-reconstruction paradigm. Knowledge distillation and contrastive learning optimize the knowledge base, enabling the integration of retrieval-enhanced representations into various CTR models. Experiments on three large-scale datasets demonstrate \name's exceptional compatibility and performance, with the neural knowledge base serving as an effective surrogate for the retrieval pool. \name surpasses the teacher model while maintaining superior inference efficiency, and demonstrates the feasibility of distilling knowledge from non-parametric methods with a parametric approach. These results highlight \name's strong potential for real-world applications and its ability to turn retrieval-based methods into practical solutions. Our implementation code is available to support reproducibility at \url{https://github.com/HSLiu-Initial/ROK.git}.