The historical interaction sequences of users play a crucial role in training recommender systems that accurately predict user preferences. However, because user behavior can be arbitrary, the noise present in these sequences makes it challenging to predict users' next actions. To address this issue, our motivation is based on the observation that training noisy sequences and clean sequences (sequences without noise) with equal weights can degrade the model's performance. We propose a novel self-supervised Auxiliary Task Joint Training (ATJT) method aimed at more accurately reweighting noisy sequences in recommender systems. Specifically, we strategically select subsets of users' original sequences and apply random replacements to generate artificially replaced noisy sequences. We then jointly train on these artificially replaced noisy sequences together with the original sequences. Through effective reweighting, we incorporate the training results of the noise recognition model into the recommender model. We evaluate our method on three datasets using a consistent base model. Experimental results demonstrate the effectiveness of introducing a self-supervised auxiliary task to enhance the base model's performance.
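The noise-generation step described above can be sketched as follows. This is an illustrative example under our own assumptions, not the authors' exact procedure: the function name, the fixed replacement ratio, and the uniform sampling from the item pool are all hypothetical choices made for clarity.

```python
import random

def make_noisy_sequence(seq, item_pool, replace_ratio=0.2, rng=None):
    """Hypothetical sketch: replace a random fraction of items in a
    user's interaction sequence with items drawn from the item pool,
    yielding an artificially replaced noisy sequence."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    seq = list(seq)
    # Replace at least one item, proportional to the sequence length.
    n_replace = max(1, int(len(seq) * replace_ratio))
    positions = rng.sample(range(len(seq)), n_replace)
    for pos in positions:
        replacement = rng.choice(item_pool)
        # Ensure the injected item actually differs from the original.
        while replacement == seq[pos]:
            replacement = rng.choice(item_pool)
        seq[pos] = replacement
    return seq

original = [3, 7, 11, 5, 9]           # a toy user interaction sequence
noisy = make_noisy_sequence(original, item_pool=list(range(100)))
```

The original and noisy versions of each sequence can then be fed jointly to the recommender model and the noise recognition model, with the latter's predictions used to down-weight sequences that look noisy during training.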