Intersectional fairness is a critical requirement for Machine Learning (ML) software, demanding fairness across subgroups defined by multiple protected attributes. This paper introduces FairHOME, a novel ensemble approach that uses higher-order mutation of inputs to enhance the intersectional fairness of ML software during the inference phase. Inspired by social science theories highlighting the benefits of diversity, FairHOME generates mutants representing diverse subgroups for each input instance, thus broadening the array of perspectives to foster a fairer decision-making process. Unlike conventional ensemble methods that combine predictions made by different models, FairHOME combines predictions for the original input and its mutants, all produced by the same ML model, to reach a final decision. Notably, FairHOME is even applicable to deployed ML software, as it bypasses the need to train new models. We extensively evaluate FairHOME against seven state-of-the-art fairness improvement methods across 24 decision-making tasks using widely adopted metrics. FairHOME consistently outperforms existing methods across all metrics considered. On average, it enhances intersectional fairness by 47.5%, surpassing the currently best-performing method by 9.6 percentage points.
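To make the inference-time ensemble idea concrete, the sketch below shows one plausible reading of the abstract: for a single input, enumerate every combination of protected-attribute values (yielding higher-order mutants), query the same model on the original input and all mutants, and aggregate the predictions. The function name `fairhome_predict`, the scikit-learn-style `predict()` interface, and the majority-vote aggregation are illustrative assumptions, not the paper's exact algorithm.

```python
import itertools
from collections import Counter

def fairhome_predict(model, instance, protected_values):
    """Ensemble prediction over higher-order mutants of one input.

    model            -- any classifier exposing a scikit-learn-style
                        predict() method (an assumption of this sketch)
    instance         -- dict mapping feature names to values
    protected_values -- dict mapping each protected attribute to its
                        possible values, e.g. {"sex": [0, 1], "race": [0, 1]}
    """
    attrs = list(protected_values)
    # Enumerate every combination of protected-attribute values;
    # each combination yields one (possibly higher-order) mutant.
    # The original input is among them, since its own attribute
    # combination is enumerated as well.
    mutants = []
    for combo in itertools.product(*(protected_values[a] for a in attrs)):
        mutant = dict(instance)
        mutant.update(zip(attrs, combo))
        mutants.append(mutant)
    # The same model predicts on the original input and all mutants;
    # no new model is trained, so this works on deployed software.
    feature_order = list(instance)
    X = [[m[f] for f in feature_order] for m in mutants]
    predictions = model.predict(X)
    # Majority vote over the predictions gives the final decision
    # (one plausible aggregation rule; the paper may use another).
    return Counter(predictions).most_common(1)[0][0]
```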