Distributionally robust optimization (DRO)-based graph neural network methods improve the out-of-distribution (OOD) generalization of recommender systems by optimizing the model's worst-case performance. However, these studies overlook the impact of noisy samples in the training data, which degrades both generalization and accuracy. Through experimental and theoretical analysis, this paper reveals that current DRO-based graph recommendation methods assign greater weight to the noise distribution, so that it dominates model parameter learning. When the model overemphasizes fitting noisy samples in the training data, it may learn irrelevant or meaningless features that do not generalize to OOD data. To address this challenge, we design a Distributionally Robust Graph model for OOD recommendation (DRGO). Specifically, our method first employs a simple yet effective diffusion paradigm to alleviate the noise effect in the latent space. In addition, an entropy regularization term is introduced into the DRO objective to avoid extreme sample weights in the worst-case distribution. Finally, we provide a theoretical proof of DRGO's generalization error bound, together with a theoretical analysis of how our approach mitigates the effect of noisy samples, which helps to understand the proposed framework from a theoretical perspective. We conduct extensive experiments on four datasets to evaluate the effectiveness of our framework against three typical distribution shifts, and the results demonstrate its superiority under both independent and identically distributed (IID) and OOD settings.
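To illustrate why the entropy term prevents extreme sample weights, consider the generic entropy-regularized DRO inner problem: maximizing the weighted loss minus an entropy penalty over the probability simplex yields softmax weights over per-sample losses. The sketch below is a minimal illustration of this general mechanism, not DRGO's exact objective; the function name `dro_weights` and the temperature `tau` are illustrative.

```python
import numpy as np

def dro_weights(losses, tau):
    """Worst-case sample weights for entropy-regularized DRO.

    Solving max_w sum_i w_i * l_i - tau * sum_i w_i * log(w_i) over the
    probability simplex gives the closed form w_i ∝ exp(l_i / tau),
    i.e., a softmax over per-sample losses. A larger tau pulls the
    weights toward uniform, so a few high-loss (possibly noisy)
    samples cannot dominate parameter learning.
    """
    losses = np.asarray(losses, dtype=float)
    z = losses / tau
    z -= z.max()  # subtract max for numerical stability
    w = np.exp(z)
    return w / w.sum()

losses = np.array([0.2, 0.5, 3.0])  # the 3.0 sample could be a noisy outlier

sharp = dro_weights(losses, tau=0.1)    # weak regularization: nearly all mass on the max loss
smooth = dro_weights(losses, tau=10.0)  # strong regularization: close to uniform weights
```

With a small `tau`, the worst-case distribution concentrates on the highest-loss (often noisy) sample; increasing `tau` smooths the weights, which is the behavior the entropy regularizer in the DRO objective is meant to enforce.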