Motivated by Recital 67 of the current corrigendum of the European Union's AI Act, we propose measures and mitigation strategies for discrimination in tabular datasets. We focus specifically on datasets that contain multiple protected attributes, such as nationality, age, and sex. This makes measuring and mitigating bias more challenging, as many existing methods are designed for a single protected attribute. This paper makes a twofold contribution: First, new discrimination measures are introduced. These measures are categorized in our framework alongside existing ones, guiding researchers and practitioners in choosing the right measure to assess the fairness of a given dataset. Second, a novel application of an existing bias mitigation method, FairDo, is presented. We show that this strategy can mitigate any type of discrimination, including intersectional discrimination, by transforming the dataset. Through experiments on real-world datasets (Adult, Bank, COMPAS), we demonstrate that de-biasing datasets with multiple protected attributes is feasible. All transformed datasets show a reduction in discrimination, by 28% on average. Furthermore, compared to the original datasets, the transformed datasets do not significantly compromise the performance of any of the tested machine learning models. In conclusion, this study demonstrates the effectiveness of the mitigation strategy used and contributes to the ongoing discussion on the implementation of the European Union's AI Act.
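As a minimal illustration of measuring intersectional discrimination over multiple protected attributes, the sketch below computes the largest statistical-parity gap between any two intersectional groups. This is a generic, illustrative metric and toy dataset, not the paper's specific measures or the FairDo method; the column names (`sex`, `age_group`, `y`) are assumptions for the example.

```python
def intersectional_parity_gap(rows, protected_keys, label_key="y"):
    """Largest difference in positive-outcome rate between any two
    intersectional groups defined by the protected attributes.
    (Illustrative measure; not the paper's specific metric.)"""
    groups = {}
    for r in rows:
        key = tuple(r[k] for k in protected_keys)  # intersectional group id
        groups.setdefault(key, []).append(r[label_key])
    rates = [sum(labels) / len(labels) for labels in groups.values()]
    return max(rates) - min(rates)

# Toy data with two protected attributes (sex, age_group) and a binary outcome y
data = [
    {"sex": "F", "age_group": "young", "y": 1},
    {"sex": "F", "age_group": "young", "y": 0},
    {"sex": "M", "age_group": "old",   "y": 1},
    {"sex": "M", "age_group": "old",   "y": 1},
]
gap = intersectional_parity_gap(data, ["sex", "age_group"])  # 1.0 - 0.5 = 0.5
```

A gap of 0 would mean all intersectional groups receive positive outcomes at the same rate; dataset transformations such as those discussed above aim to push this gap toward 0 without degrading model performance.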