Graph neural networks (GNNs) have shown impressive performance in recommender systems, particularly in collaborative filtering (CF). The key lies in aggregating neighborhood information on a user-item interaction graph to enhance user/item representations. However, we have discovered that this aggregation mechanism comes with a drawback: it amplifies biases present in the interaction graph. For instance, a user's interactions with items can be driven both by unbiased true interest and by various biased factors such as item popularity or exposure. Yet the current aggregation approach combines all information, biased and unbiased alike, leading to biased representation learning. Consequently, graph-based recommenders can learn distorted views of users/items, hindering the modeling of their true preferences and harming generalization. To address this issue, we introduce a novel framework called Adversarial Graph Dropout (AdvDrop). It differentiates between unbiased and biased interactions, enabling unbiased representation learning. For each user/item, AdvDrop employs adversarial learning to split the neighborhood into two views: one with bias-mitigated interactions and the other with bias-aware interactions. After view-specific aggregation, AdvDrop enforces invariance between the bias-mitigated and bias-aware representations, shielding them from the influence of bias. We validate AdvDrop's effectiveness on five public datasets that cover both general and specific biases, demonstrating significant improvements. Furthermore, in-depth analysis reveals that our method yields a meaningful separation of subgraphs and achieves unbiased representations for graph-based CF models. Our code is publicly available at https://github.com/Arthurma71/AdvDrop.
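To make the pipeline concrete, here is a minimal numpy sketch of the three steps the abstract describes: splitting a neighborhood into complementary bias-mitigated and bias-aware views, aggregating each view separately, and penalizing disagreement between the resulting representations. This is not the authors' implementation: the actual AdvDrop learns the edge split adversarially, whereas this sketch samples a random Bernoulli mask, and the simple L2 invariance loss and all function names (`split_views`, `aggregate`, `invariance_loss`) are illustrative assumptions.

```python
import numpy as np

def split_views(adj, drop_prob, rng):
    """Partition the edges of an adjacency matrix into two complementary views.

    In AdvDrop the per-edge drop decision is produced by an adversarially
    trained mask generator; here we stand in a random Bernoulli mask.
    """
    edge = adj > 0
    mask = (rng.random(adj.shape) < drop_prob) & edge  # edges sent to the bias-aware view
    view_aware = adj * mask            # "dropped" edges: bias-aware interactions
    view_mitigated = adj * ~mask       # kept edges: bias-mitigated interactions
    return view_mitigated, view_aware

def aggregate(adj, feats):
    """One round of mean neighborhood aggregation (degree-normalized)."""
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                # avoid division by zero for isolated nodes
    return adj @ feats / deg

def invariance_loss(rep_a, rep_b):
    """Squared L2 distance between row-normalized representations.

    A stand-in for the invariance objective that shields representations
    from bias: the two view-specific embeddings are pushed to agree.
    """
    na = rep_a / (np.linalg.norm(rep_a, axis=1, keepdims=True) + 1e-8)
    nb = rep_b / (np.linalg.norm(rep_b, axis=1, keepdims=True) + 1e-8)
    return float(np.mean(np.sum((na - nb) ** 2, axis=1)))

# Tiny bipartite-style example: 3 nodes, initial one-hot features.
rng = np.random.default_rng(0)
adj = np.array([[0., 1., 1.],
                [1., 0., 0.],
                [1., 0., 0.]])
view_m, view_a = split_views(adj, drop_prob=0.5, rng=rng)
feats = np.eye(3)
loss = invariance_loss(aggregate(view_m, feats), aggregate(view_a, feats))
```

Note that the two views partition the edge set exactly (`view_m + view_a == adj`), matching the abstract's description of splitting each neighborhood rather than duplicating it.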