What constitutes a fair decision? This question is difficult enough for humans, and it becomes even more challenging when Artificial Intelligence (AI) models are used to make decisions. In light of discriminatory algorithmic behaviors, the EU has recently passed the AI Act, which mandates specific rules for high-risk AI systems, incorporating both traditional legal non-discrimination regulations and machine-learning-based algorithmic fairness concepts. This paper aims to bridge these two concepts in the AI Act in two steps: first, a high-level introduction to both concepts aimed at legal and computer-science-oriented scholars, and second, an in-depth analysis of how the AI Act relates legal non-discrimination regulations to algorithmic fairness. Our analysis yields three key findings: (1) most non-discrimination regulations apply only to high-risk AI systems; (2) the regulation of high-risk systems encompasses both data input requirements and output monitoring, though these requirements are partly inconsistent and raise questions of computational feasibility; and (3) classical EU non-discrimination law and the AI Act regulations may interact in the future, which leads us to recommend developing more specific auditing and testing methodologies for AI systems. This paper is intended as a foundation for future interdisciplinary collaboration between legal scholars and computer-science-oriented machine learning researchers studying discrimination in AI systems.
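To make the notion of "algorithmic fairness concepts" concrete for legal readers, the following is a minimal sketch of one widely used group-fairness metric, demographic (statistical) parity difference. The function, the loan-approval scenario, and all data are illustrative assumptions, not taken from the paper or the AI Act.

```python
# Minimal sketch of one common algorithmic fairness metric:
# the demographic (statistical) parity difference between two groups,
# i.e. the gap in positive-prediction rates across a protected attribute.

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    rates = {}
    for g in set(group):
        preds = [p for p, gi in zip(y_pred, group) if gi == g]
        rates[g] = sum(preds) / len(preds)
    a, b = sorted(rates)
    return abs(rates[a] - rates[b])

# Hypothetical loan-approval predictions (1 = approved) for groups "A" and "B".
y_pred = [1, 1, 0, 1, 0, 0, 1, 0]
group = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(y_pred, group))  # 0.75 vs. 0.25 -> 0.5
```

A value of 0 would indicate equal approval rates across groups; the larger the value, the greater the disparity. Such metrics are one way output monitoring of a high-risk system could be operationalized, though, as the paper notes, the legal requirements do not prescribe any specific metric.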