Recent studies have highlighted significant fairness issues in Graph Transformer (GT) models, particularly against subgroups defined by sensitive features. Additionally, GTs are computationally intensive and memory-demanding, limiting their application to large-scale graphs. Our experiments demonstrate that graph partitioning can enhance the fairness of GT models while reducing computational complexity. To understand this improvement, we conducted a theoretical investigation into the root causes of fairness issues in GT models. We found that the sensitive features of higher-order nodes disproportionately influence lower-order nodes, resulting in sensitive feature bias. We propose Fairness-aware scalable GT based on Graph Partitioning (FairGP), which partitions the graph to minimize the negative impact of higher-order nodes. By optimizing attention mechanisms, FairGP mitigates the bias introduced by global attention, thereby enhancing fairness. Extensive empirical evaluations on six real-world datasets validate the superior performance of FairGP in achieving fairness compared to state-of-the-art methods. The code is available at https://github.com/LuoRenqiang/FairGP.
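FairGP's actual partitioning and attention optimization are detailed in the paper and repository; the following is only a minimal sketch of the core idea described above: restricting attention to within-partition neighborhoods so that higher-order (high-degree) nodes cannot influence every lower-order node through global attention. All function and variable names here are hypothetical, not FairGP's API.

```python
import numpy as np

def partition_attention(X, parts):
    """Hypothetical sketch: softmax self-attention computed independently
    within each graph partition, rather than globally over all nodes.

    X:     (n, d) array of node features.
    parts: list of index arrays, one per partition (a partitioning of 0..n-1).
    Returns an (n, d) array where each node attends only to nodes
    in its own partition, so nodes outside the partition (including
    high-degree hubs) contribute nothing to its representation.
    """
    n, d = X.shape
    out = np.zeros_like(X)
    for idx in parts:
        Q = K = V = X[idx]                     # shared projection omitted for brevity
        scores = Q @ K.T / np.sqrt(d)          # scaled dot-product scores
        w = np.exp(scores - scores.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)      # row-wise softmax
        out[idx] = w @ V                       # aggregate within the partition only
    return out
```

Because attention is blocked by partition, perturbing a node in one partition leaves the representations of nodes in every other partition unchanged, which is the mechanism by which partitioning limits the disproportionate influence of higher-order nodes.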