Driven by the powerful representation ability of Graph Neural Networks (GNNs), numerous GNN models have been widely deployed in many real-world applications. Nevertheless, due to distribution disparities between different demographic groups, fairness in high-stakes decision-making systems is receiving increasing attention. Although many recent works have been devoted to improving the fairness of GNNs and have achieved considerable success, they all require significant architectural changes or additional loss functions that demand further hyper-parameter tuning. Surprisingly, we find that simple re-balancing methods can easily match or surpass existing fair GNN methods. We argue that the imbalance across demographic groups is a significant source of unfairness, as it leads to imbalanced contributions from each group to parameter updates. However, these simple re-balancing methods have their own shortcomings during training. In this paper, we propose FairGB, Fair Graph Neural Network via re-Balancing, which mitigates the unfairness of GNNs through group balancing. Technically, FairGB consists of two modules: counterfactual node mixup and contribution alignment loss. First, we select inter-domain and inter-class counterfactual pairs, and interpolate their ego-networks to generate new samples. Guided by analysis, we reveal the debiasing mechanism of our model from a causal view and prove that our strategy can make sensitive attributes statistically independent of target labels. Second, we reweight the contribution of each group according to gradients. When combined, the two modules mutually promote each other. Experimental results on benchmark datasets show that our method achieves state-of-the-art results with respect to both utility and fairness metrics. Code is available at https://github.com/ZhixunLEE/FairGB.
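The counterfactual node mixup described above can be sketched at the feature level. The snippet below is a minimal illustration, not the paper's implementation: it assumes a counterfactual pair has already been selected (e.g., same class, different sensitive group) and interpolates the two feature vectors with a Beta-distributed weight, as in standard mixup. The function name `mixup_pair` and the toy features are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup_pair(x_a, x_b, alpha=1.0, rng=rng):
    """Convexly interpolate two node-feature vectors with a
    Beta(alpha, alpha)-distributed weight, as in standard mixup."""
    lam = rng.beta(alpha, alpha)
    return lam * x_a + (1.0 - lam) * x_b, lam

# Toy counterfactual pair: same label, different sensitive group.
x_a = np.array([1.0, 0.0, 2.0])
x_b = np.array([0.0, 1.0, 0.0])

x_mix, lam = mixup_pair(x_a, x_b)
```

In FairGB the interpolation is applied over ego-networks (a node together with its neighborhood) rather than isolated feature vectors; this sketch only conveys the convex-combination step.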
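The contribution alignment idea (reweighting each group's contribution according to gradients) can likewise be sketched in a simplified form. The function below is a hypothetical stand-in, not the paper's exact loss: given per-sample gradient norms and group labels, it assigns per-sample weights so that every group's total gradient contribution is equalized.

```python
import numpy as np

def group_balance_weights(grad_norms, groups):
    """Per-sample weights that equalize each group's total gradient
    contribution; a simplified stand-in for contribution alignment."""
    grad_norms = np.asarray(grad_norms, dtype=float)
    groups = np.asarray(groups)
    weights = np.ones_like(grad_norms)
    for g in np.unique(groups):
        mask = groups == g
        total = grad_norms[mask].sum()
        if total > 0:
            # After weighting, this group's contributions sum to 1.
            weights[mask] = 1.0 / total
    return weights

# Group 0 dominates the raw gradients; reweighting evens this out.
grad_norms = np.array([2.0, 2.0, 0.5, 0.5])
groups = np.array([0, 0, 1, 1])
w = group_balance_weights(grad_norms, groups)
```

After weighting, group 0 and group 1 contribute equally to the parameter update, which is the balancing effect the abstract attributes to the contribution alignment loss.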