Graph unlearning has emerged as a critical mechanism for supporting sustainable and privacy-preserving social networks, enabling models to remove the influence of deleted nodes and thereby better safeguard user information. However, we observe that existing graph unlearning techniques insufficiently protect sensitive attributes, often leading to degraded algorithmic fairness compared with traditional graph learning methods. To address this gap, we introduce FairGU, a fairness-aware graph unlearning framework designed to preserve both utility and fairness during the unlearning process. FairGU integrates a dedicated fairness-aware module with effective data protection strategies, ensuring that sensitive attributes are neither inadvertently amplified nor structurally exposed when nodes are removed. Through extensive experiments on multiple real-world datasets, we demonstrate that FairGU consistently outperforms state-of-the-art graph unlearning methods and fairness-enhanced graph learning baselines in terms of both accuracy and fairness metrics. Our findings highlight a previously overlooked risk in current unlearning practices and establish FairGU as a robust and equitable solution for the next generation of socially sustainable networked systems. The code is available at https://github.com/LuoRenqiang/FairGU.
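To make the notion of "fairness metrics" concrete, the sketch below computes statistical parity difference (ΔSP), a group-fairness measure commonly reported in fairness-aware graph learning. Whether FairGU evaluates exactly this metric is an assumption on our part; the abstract does not name the specific metrics used.

```python
def statistical_parity_difference(y_pred, sensitive):
    """ΔSP = |P(ŷ=1 | s=0) - P(ŷ=1 | s=1)| for binary predictions
    y_pred and a binary sensitive attribute s. Lower is fairer."""
    group0 = [y for y, s in zip(y_pred, sensitive) if s == 0]
    group1 = [y for y, s in zip(y_pred, sensitive) if s == 1]
    rate0 = sum(group0) / len(group0)  # positive rate in group s=0
    rate1 = sum(group1) / len(group1)  # positive rate in group s=1
    return abs(rate0 - rate1)

# Toy example: group 0 receives positives at rate 3/4, group 1 at 1/4,
# so ΔSP = 0.5, indicating a large disparity between the two groups.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
sens  = [0, 0, 0, 0, 1, 1, 1, 1]
print(statistical_parity_difference(preds, sens))  # → 0.5
```

A perfectly group-fair classifier under this metric would yield ΔSP = 0; the concern raised above is that node removal during unlearning can silently increase such disparities even when overall accuracy is preserved.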