We study fairness in classification, where automated decisions are made for individuals from different protected groups. In high-stakes settings, decision errors can disproportionately affect certain protected groups, leading to unfair outcomes. To address this issue, we propose a fairness-adjusted selective inference (FASI) framework and develop data-driven algorithms that achieve statistical parity by controlling the false selection rate (FSR) among protected groups. Our FASI algorithm converts the outputs of black-box classifiers into R-values, which are intuitive and computationally efficient. These R-values form the basis of selection rules that provably control the FSR in finite samples for each protected group, effectively mitigating unfairness in group-wise error rates. We demonstrate the numerical performance of our approach on both simulated and real data.
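To make the abstract's pipeline concrete, the sketch below illustrates one plausible plug-in reading of it: classifier scores are converted, separately within each protected group, into R-value-like quantities (a running estimate of the group's false selection proportion, analogous to a q-value), and a candidate is selected when its R-value falls below a target level α. The function names (`plugin_r_values`, `select`) and the specific plug-in estimate are illustrative assumptions, not the paper's exact calibrated R-value construction.

```python
import numpy as np

def plugin_r_values(scores, groups):
    """Illustrative group-wise R-values from classifier scores.

    `scores[i]` is the classifier's estimated probability that candidate i
    belongs to the target class, so 1 - scores[i] is a plug-in estimate of
    the error incurred by selecting i.  Within each protected group, the
    R-value of a candidate is the smallest running average of these error
    estimates over any threshold set that includes the candidate -- a
    plug-in estimate of that group's false selection proportion.
    """
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    r = np.empty_like(scores)
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        order = idx[np.argsort(-scores[idx])]        # most confident first
        errs = 1.0 - scores[order]
        running_fsp = np.cumsum(errs) / np.arange(1, len(order) + 1)
        # q-value-style monotonization: minimum over all larger thresholds
        r[order] = np.minimum.accumulate(running_fsp[::-1])[::-1]
    return r

def select(scores, groups, alpha):
    """Select every candidate whose R-value is at most alpha.

    Because R-values are computed per group, the same nominal level
    alpha is applied to each protected group, which is what equalizes
    the group-wise error rates (statistical parity in FSR).
    """
    return plugin_r_values(scores, groups) <= alpha
```

Because the thresholding happens inside each group, a well-represented group cannot "absorb" the error budget of an under-represented one, which is the core intuition behind controlling the FSR separately for protected groups.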