Sequential recommendation (SR) aims to predict a user's next action by learning from their historical interaction sequences. In real-world applications, these models require periodic updates to adapt to new interactions and evolving user preferences. While incremental learning methods facilitate these updates, they face significant challenges: replay-based approaches incur high memory and computational costs, and regularization-based methods often struggle to discard outdated or conflicting knowledge. To overcome these challenges, we propose SA-CAISR, a Stage-Adaptive and Conflict-Aware Incremental Sequential Recommendation framework. As a buffer-free framework, SA-CAISR operates using only the old model and new data, directly addressing the high costs of replay-based techniques. SA-CAISR introduces a novel Fisher-weighted knowledge-screening mechanism that dynamically identifies outdated knowledge by estimating parameter-level conflicts between the old model and new data, allowing our approach to selectively remove obsolete knowledge while preserving compatible historical patterns. This dynamic balance between stability and adaptability enables our method to achieve new state-of-the-art performance in incremental SR. Specifically, SA-CAISR improves Recall@20 by 2.0%, MRR@20 by 1.2%, and NDCG@20 by 1.4% on average across datasets, while reducing memory usage by 97.5% and training time by 46.9% compared to the best baselines. This efficiency allows real-world systems to rapidly update user profiles with minimal computational overhead, ensuring more timely and accurate recommendations.
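The Fisher-weighted screening idea can be illustrated with a minimal sketch. The paper does not specify the exact scoring rule, so the following is an assumption for illustration only: the empirical Fisher information is estimated from per-sample gradients of the old model on the new data, and a parameter is flagged as "outdated" when the new-data gradient would push it against its stored sign, weighted by its Fisher importance. The function name `fisher_conflict_screen` and the threshold `tau` are hypothetical, not from the paper.

```python
import numpy as np

def fisher_conflict_screen(theta_old, grads_new, tau=0.5):
    """Hypothetical sketch of Fisher-weighted conflict screening.

    theta_old : old model parameters, shape (d,)
    grads_new : per-sample gradients of the old model on new data, shape (n, d)
    tau       : assumed conflict threshold in [0, 1]

    Returns a boolean mask marking parameters judged outdated
    (high Fisher-weighted conflict) plus the per-parameter scores.
    """
    # Empirical Fisher: mean squared per-sample gradient on the new data.
    fisher = (grads_new ** 2).mean(axis=0)
    g_mean = grads_new.mean(axis=0)

    # Conflict indicator: a gradient-descent step (delta = -g) would move
    # the parameter against its stored sign, i.e. sign(g) == sign(theta).
    opposes = np.maximum(0.0, np.sign(theta_old) * np.sign(g_mean))

    # Fisher-weighted conflict score, normalized to [0, 1].
    conflict = fisher * opposes * np.abs(g_mean)
    score = conflict / (conflict.max() + 1e-12)
    return score > tau, score
```

In such a scheme, flagged parameters would be updated freely on the new data, while the remaining (compatible) parameters would be anchored to their old values, e.g. by an EWC-style Fisher-weighted penalty; this is one plausible reading of "selectively remove obsolete knowledge while preserving compatible historical patterns."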