Graph neural network (GNN) link prediction is increasingly deployed in citation, collaboration, and online social networks to recommend academic literature, collaborators, and friends. While prior research has investigated the dyadic fairness of GNN link prediction, within-group fairness (e.g., among queer women) and the "rich get richer" dynamics of link prediction remain underexplored. However, these aspects have significant consequences for degree and power imbalances in networks. In this paper, we shed light on how degree bias in networks affects Graph Convolutional Network (GCN) link prediction. In particular, we theoretically uncover that GCNs with a symmetric normalized graph filter exhibit a within-group preferential attachment bias. We validate our theoretical analysis on real-world citation, collaboration, and online social networks. We further connect GCNs' preferential attachment bias to unfairness in link prediction and propose a new within-group fairness metric. This metric quantifies disparities in link prediction scores within social groups, with the goal of combating the amplification of degree and power disparities. Finally, we propose a simple training-time strategy to alleviate within-group unfairness, and we show that it is effective on citation, social, and credit networks.