The design of Graph Transformers (GTs) generally neglects fairness considerations, resulting in biased outcomes against certain sensitive subgroups. Since GTs encode graph information without relying on message-passing mechanisms, conventional fairness-aware graph learning methods cannot be directly applied to address these issues. To tackle this challenge, we propose FairGT, a Fairness-aware Graph Transformer explicitly crafted to mitigate fairness concerns inherent in GTs. FairGT incorporates a meticulous structural feature selection strategy and a multi-hop node feature integration method, ensuring the independence of sensitive features and bolstering fairness considerations. These fairness-aware graph information encodings seamlessly integrate into the Transformer framework for downstream tasks. We also prove that the proposed fair structural topology encoding, built on adjacency matrix eigenvector selection and multi-hop integration, is theoretically effective. Empirical evaluations conducted across five real-world datasets demonstrate FairGT's superiority in fairness metrics over existing graph transformers, graph neural networks, and state-of-the-art fairness-aware graph learning approaches.
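The two encoding ingredients named above, eigenvector-based structural selection from the adjacency matrix and multi-hop node feature integration, can be illustrated with a minimal sketch. This is not the paper's exact construction: the eigenvector ranking criterion, the hop count, and the combination step here are illustrative assumptions.

```python
import numpy as np

def graph_encoding_sketch(A, X, k_eig=2, hops=2):
    """Illustrative sketch of the abstract's two encoding ideas (assumed
    details, not FairGT's actual algorithm):
      1. Structural encoding: select k eigenvectors of the (symmetric)
         adjacency matrix A, here ranked by eigenvalue magnitude.
      2. Multi-hop feature integration: concatenate X, AX, ..., A^h X.
    A: (n, n) symmetric adjacency matrix; X: (n, d) node feature matrix.
    Returns an (n, k_eig + d * (hops + 1)) encoding matrix.
    """
    # Eigendecomposition of the symmetric adjacency matrix
    eigvals, eigvecs = np.linalg.eigh(A)
    # Keep the k_eig eigenvectors with largest-magnitude eigenvalues
    idx = np.argsort(-np.abs(eigvals))[:k_eig]
    struct_enc = eigvecs[:, idx]

    # Multi-hop integration: stack features propagated 0..hops steps
    feats = [X]
    H = X
    for _ in range(hops):
        H = A @ H          # one further hop of propagation
        feats.append(H)
    node_enc = np.concatenate(feats, axis=1)

    # Combine structural and multi-hop node encodings per node
    return np.concatenate([struct_enc, node_enc], axis=1)
```

In a full fairness-aware pipeline, the eigenvector selection would additionally be constrained so the retained structural components are statistically independent of the sensitive attribute; that selection rule is the part this sketch deliberately leaves out.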