We investigate the problem of fairness in classification, where automated decisions are made for individuals from different protected groups. In high-consequence scenarios, decision errors can disproportionately affect certain protected groups, leading to unfair outcomes. To address this issue, we propose a fairness-adjusted selective inference (FASI) framework and develop data-driven algorithms that achieve statistical parity by controlling the false selection rate (FSR) among protected groups. Our FASI algorithm operates by converting the outputs of black-box classifiers into R-values, which are both intuitive and computationally efficient. These R-values serve as the basis for selection rules that are provably valid for finite-sample FSR control within each protected group, effectively mitigating the unfairness in group-wise error rates. We demonstrate the numerical performance of our approach using both simulated and real data.
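To make the R-value construction concrete, below is a minimal sketch of a group-wise selection rule in the spirit described above. It is an illustrative assumption, not the authors' exact estimator: classifier scores are treated as (calibrated) probabilities of correct assignment, `1 - score` is averaged over the top candidates in each group to estimate the false selection proportion, and the R-value is made monotone so it can be thresholded at a target level, analogously to q-values in multiple testing. The function names `r_values` and `select` are hypothetical.

```python
import numpy as np

def r_values(scores, groups):
    # Illustrative sketch (not the paper's exact estimator): convert
    # classifier scores (higher = more confident) into group-wise
    # R-values, analogous to q-values in multiple testing.
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    r = np.empty_like(scores)
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        s = scores[idx]
        order = np.argsort(-s)  # most confident first
        # Estimated false selection proportion if the top k are selected,
        # treating 1 - score as the probability of an erroneous selection.
        fsp = np.cumsum(1.0 - s[order]) / np.arange(1, len(s) + 1)
        # R-value = smallest estimated FSR level at which the individual
        # would still be selected; enforce monotonicity from the bottom up.
        r_sorted = np.minimum.accumulate(fsp[::-1])[::-1]
        r[idx[order]] = r_sorted
    return r

def select(scores, groups, alpha=0.1):
    # Select exactly those individuals whose R-value is at most alpha,
    # so the estimated FSR is controlled at alpha within every group.
    return r_values(scores, groups) <= alpha
```

Because R-values are computed separately within each protected group, the same threshold `alpha` yields group-specific selection boundaries, which is how the framework equalizes error rates across groups rather than overall.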