Recommender systems suffer from biases that cause the collected feedback to incompletely reveal user preference. While debiasing learning has been extensively studied, existing methods mostly focus on the specialized (called counterfactual) test environment simulated by random exposure of items, which significantly degrades accuracy in the typical (called factual) test environment based on actual user-item interactions. In fact, each test environment highlights a different benefit: the counterfactual test emphasizes user satisfaction in the long term, while the factual test focuses on predicting subsequent user behaviors on platforms. Therefore, it is desirable to have a model that performs well on both tests rather than only one. In this work, we introduce a new learning framework, called Bias-adaptive Preference distillation Learning (BPL), to gradually uncover user preferences with dual distillation strategies. These distillation strategies are designed to drive high performance in both the factual and counterfactual test environments. Employing a specialized form of teacher-student distillation from a biased model, BPL retains accurate preference knowledge aligned with the collected feedback, leading to high performance in the factual test. Furthermore, through self-distillation with reliability filtering, BPL iteratively refines its knowledge throughout the training process. This enables the model to produce more accurate predictions across a broader range of user-item combinations, thereby improving performance in the counterfactual test. Comprehensive experiments validate the effectiveness of BPL in both factual and counterfactual tests. Our implementation is available at: https://github.com/SeongKu-Kang/BPL.
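To make the two distillation signals concrete, below is a minimal numpy sketch of how they might be combined, under stated assumptions: the function name `distillation_losses`, the confidence threshold `tau`, and the "distance from 0.5" reliability rule are illustrative simplifications, not the paper's exact formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def distillation_losses(student_logits, teacher_logits, past_student_logits,
                        observed_mask, tau=0.2):
    """Hypothetical sketch of BPL's dual distillation signals.

    - Teacher distillation: on observed user-item pairs, match the biased
      teacher's soft predictions (preserves accuracy on factual feedback).
    - Self-distillation with reliability filtering: on all pairs, match the
      model's own earlier predictions, but only where those predictions are
      confident, refining knowledge beyond the observed interactions.
    """
    s = sigmoid(student_logits)        # current student predictions
    t = sigmoid(teacher_logits)        # biased-teacher soft labels
    p = sigmoid(past_student_logits)   # student's earlier predictions

    # Cross-entropy against teacher soft labels, on observed feedback only.
    kd = -(t * np.log(s + 1e-8) + (1 - t) * np.log(1 - s + 1e-8))
    kd_loss = (kd * observed_mask).sum() / max(observed_mask.sum(), 1)

    # Reliability filter: keep self-targets whose confidence |p - 0.5| > tau.
    reliable = (np.abs(p - 0.5) > tau).astype(float)
    sd = -(p * np.log(s + 1e-8) + (1 - p) * np.log(1 - s + 1e-8))
    sd_loss = (sd * reliable).sum() / max(reliable.sum(), 1)
    return kd_loss, sd_loss
```

The two losses would then be weighted and added to the base recommendation objective; the filtering threshold controls how aggressively unreliable self-targets are discarded.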