Sequential recommendation focuses on mining useful patterns from a user's behavior history to better estimate their preference over candidate items. Previous solutions adopt recurrent networks or retrieval methods to obtain the user's profile representation for preference estimation. In this paper, we propose a novel sequential recommendation framework called Look into the Future (LIFT), which builds and leverages the contexts of sequential recommendation. The context in LIFT refers to a user's current profile, which can be represented based on both past and future behaviors; the learned context is therefore more effective in predicting the user's behaviors in sequential recommendation. Since it is impossible to use real future information to predict the current behavior, we propose a novel retrieval-based framework that uses the most similar interaction's future information as the future context of the target interaction, without data leakage. Furthermore, to exploit the intrinsic information embedded within the context itself, we introduce a pretraining method based on behavior masking that enables efficient learning of context representations. We demonstrate that retrieving relevant contexts from a global user pool greatly improves preference estimation. In extensive experiments on real-world datasets, LIFT achieves significant performance improvements on click-through rate prediction tasks in sequential recommendation over strong baselines.
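To make the retrieval idea concrete, below is a minimal sketch (not the paper's actual implementation) of how a future context could be obtained without data leakage: given an embedding of the target interaction's history, we search a pool of earlier interactions, whose futures have already been observed, for the most similar one, and borrow its subsequent behaviors as a proxy future context. All names (`retrieve_future_context`, the embedding inputs) are illustrative assumptions.

```python
import numpy as np

def retrieve_future_context(target_hist_emb, pool_hist_embs, pool_future_behaviors):
    """Return the future behaviors of the pooled interaction whose history
    embedding is most similar (cosine similarity) to the target's.

    No data leakage is assumed because the pool contains only interactions
    whose futures were observed before the target interaction's timestamp.
    """
    pool = np.asarray(pool_hist_embs, dtype=float)
    target = np.asarray(target_hist_emb, dtype=float)
    # Cosine similarity between the target history and every pooled history.
    sims = pool @ target / (
        np.linalg.norm(pool, axis=1) * np.linalg.norm(target) + 1e-12
    )
    best = int(np.argmax(sims))
    # The neighbor's *future* behaviors serve as the target's future context.
    return pool_future_behaviors[best], float(sims[best])
```

In practice such nearest-neighbor search would run over a large global user pool with an approximate index rather than the brute-force scan shown here.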