Equalized odds, a popular notion of algorithmic fairness, aims to ensure that sensitive variables, such as race and gender, do not unfairly influence an algorithm's predictions when conditioning on the true outcome. Despite rapid advancements, most existing research focuses on violations of equalized odds caused by a single sensitive attribute, leaving the challenge of simultaneously accounting for multiple attributes under-addressed. We address this gap by introducing a fairness learning approach that integrates adversarial learning with a novel inverse conditional permutation. This approach effectively and flexibly handles multiple sensitive attributes, potentially of mixed data types. The efficacy and flexibility of our method are demonstrated through both simulation studies and empirical analysis of real-world datasets.
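To give a concrete sense of the permutation idea behind this line of work: a plain conditional permutation shuffles the sensitive attributes within each level of the true outcome, which preserves the conditional distribution P(A | Y) while breaking any remaining dependence given Y. The sketch below illustrates only this basic building block, not the paper's inverse conditional permutation or its adversarial training loop; the function name and the toy data are illustrative assumptions.

```python
import numpy as np

def conditional_permutation(A, Y, rng=None):
    """Permute rows of the sensitive-attribute matrix A within each
    level of the true outcome Y. This preserves P(A | Y) while
    breaking residual dependence between A and anything else given Y.
    Illustrative sketch only; not the paper's inverse conditional
    permutation."""
    rng = np.random.default_rng(rng)
    A = np.asarray(A)
    Y = np.asarray(Y)
    A_perm = A.copy()
    for y in np.unique(Y):
        idx = np.where(Y == y)[0]          # positions with outcome y
        A_perm[idx] = A[rng.permutation(idx)]  # shuffle A within the group
    return A_perm

# Toy data: 10 samples, 2 sensitive attributes, binary outcome.
A = np.arange(20).reshape(10, 2)
Y = np.array([0, 1] * 5)
A_perm = conditional_permutation(A, Y, rng=0)
```

In an adversarial setup, permuted copies like `A_perm` serve as samples from the conditional-independence null, against which a discriminator tries to distinguish the real data.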