Equalized odds, a popular notion of algorithmic fairness, aims to ensure that sensitive variables, such as race and gender, do not unfairly influence the algorithm's prediction when conditioning on the true outcome. Despite rapid advancements, current research primarily focuses on equalized odds violations caused by a single sensitive attribute, leaving the challenge of simultaneously accounting for multiple attributes largely unaddressed. We bridge this gap by introducing an in-processing fairness-aware learning approach, FairICP, which integrates adversarial learning with a novel inverse conditional permutation scheme. FairICP offers a theoretically justified, flexible, and efficient scheme to promote equalized odds under fairness conditions described by complex and multidimensional sensitive attributes. The efficacy and adaptability of our method are demonstrated through both simulation studies and empirical analyses of real-world datasets.