Sequential recommendation aims to estimate how a user's interests evolve over time by uncovering valuable patterns from the user's behavior history. Many previous sequential models rely solely on users' historical information to model the evolution of their interests, neglecting the crucial role that future information plays in accurately capturing these dynamics. However, effectively incorporating future information into sequential modeling is non-trivial, since it is impossible to make the current-step prediction for a target user by leveraging that user's own future data. In this paper, we propose a novel sequential recommendation framework called Look into the Future (LIFT), which builds and leverages the contexts of sequential recommendation. In LIFT, the context of a target user's interaction is represented based on (i) the user's own past behaviors and (ii) the past and future behaviors of similar interactions retrieved from other users. As such, the learned context is more informative and effective for predicting the target user's behaviors in sequential recommendation, without temporal data leakage. Furthermore, to exploit the intrinsic information embedded within the context itself, we introduce a novel pretraining method based on behavior masking. In extensive experiments on five real-world datasets, LIFT achieves significant performance improvements over strong baselines on click-through-rate prediction and rating prediction tasks in sequential recommendation, demonstrating that retrieving and leveraging relevant contexts from the global user pool greatly benefits sequential recommendation. The experiment code is provided at https://anonymous.4open.science/r/LIFT-277C/Readme.md.
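To make the retrieval idea concrete, the following is a minimal sketch (our own illustration, not the paper's implementation; all function and variable names are hypothetical). Each interaction in a global pool is summarized by an embedding; for a target interaction we retrieve the most similar pooled interactions from other users. Because those interactions occurred in the past, their future behaviors are already observed, so using them as context introduces no temporal data leakage.

```python
import numpy as np

def retrieve_context(target_emb, pool_embs, pool_pasts, pool_futures, k=2):
    """Return the past/future behavior sequences of the k pooled
    interactions most similar to the target (hypothetical sketch)."""
    # cosine similarity between the target and every pooled interaction embedding
    sims = pool_embs @ target_emb / (
        np.linalg.norm(pool_embs, axis=1) * np.linalg.norm(target_emb) + 1e-8
    )
    top = np.argsort(-sims)[:k]  # indices of the k nearest interactions
    return [(pool_pasts[i], pool_futures[i]) for i in top]

# toy pool: 3 interactions with 4-d embeddings and short behavior sequences
pool_embs = np.array([[1.0, 0, 0, 0], [0, 1.0, 0, 0], [0.9, 0.1, 0, 0]])
pool_pasts = [["a", "b"], ["c"], ["a", "d"]]
pool_futures = [["e"], ["f", "g"], ["h"]]

ctx = retrieve_context(np.array([1.0, 0, 0, 0]), pool_embs,
                       pool_pasts, pool_futures, k=2)
print(ctx)  # past/future pairs of the two most similar interactions
```

The retrieved past/future pairs would then be encoded together with the target user's own history to form the final context representation.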