Fair and trustworthy AI is becoming increasingly important in both machine learning and legal domains. One important consequence is that decision makers must seek to guarantee a 'fair', i.e., non-discriminatory, algorithmic decision procedure. However, several competing notions of algorithmic fairness have been shown to be mutually incompatible under realistic factual assumptions. This concerns, for example, the widely used fairness measures of 'calibration within groups', 'balance for the positive class', and 'balance for the negative class'. In this paper, we present a novel algorithm (FAir Interpolation Method: FAIM) for continuously interpolating between these three fairness criteria. Thus, an initially unfair prediction can be remedied to meet, at least partially, a desired, weighted combination of the respective fairness conditions. We demonstrate the effectiveness of our algorithm when applied to synthetic data, the COMPAS data set, and a new, real-world data set from the e-commerce sector. Finally, we discuss to what extent FAIM can be harnessed to comply with conflicting legal obligations. The analysis suggests that it may operationalize duties not only in traditional legal fields, such as credit scoring and criminal justice proceedings, but also under the latest AI regulations put forth in the EU, such as the Digital Markets Act and the recently enacted AI Act.
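The interpolation idea can be illustrated with a toy sketch. This is not the paper's actual construction; it merely shows the general shape of the approach: given three hypothetical score vectors, each adjusted to satisfy one fairness criterion in isolation, a convex combination with user-chosen weights moves continuously between the criteria.

```python
import numpy as np

def interpolate_scores(s_calib, s_pos, s_neg, theta):
    """Toy illustration (not FAIM itself): convex combination of three
    hypothetical score vectors, each assumed to satisfy one criterion
    ('calibration within groups', 'balance for the positive class',
    'balance for the negative class') in isolation.

    theta = (t1, t2, t3) with t1 + t2 + t3 = 1 and all ti >= 0.
    """
    t = np.asarray(theta, dtype=float)
    assert np.isclose(t.sum(), 1.0) and (t >= 0).all()
    return t[0] * s_calib + t[1] * s_pos + t[2] * s_neg

# Equal weights average the three adjusted score vectors.
s = interpolate_scores(np.array([0.9, 0.2]),
                       np.array([0.7, 0.4]),
                       np.array([0.8, 0.3]),
                       (1 / 3, 1 / 3, 1 / 3))
```

Setting one weight to 1 recovers the corresponding single-criterion adjustment, so the weights let a decision maker trade off the mutually incompatible criteria rather than commit to one.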