The topic of fairness in AI, as debated in the FATE (Fairness, Accountability, Transparency, and Ethics in AI) communities, has sparked meaningful discussion in recent years. From a legal perspective, however, particularly that of European Union law, many open questions remain. Whereas algorithmic fairness aims to mitigate structural inequalities at the design level, European non-discrimination law is tailored to individual cases of discrimination after an AI model has been deployed. The AI Act may represent a significant step towards bridging these two approaches by shifting non-discrimination responsibilities into the design stage of AI models. Based on an integrative reading of the AI Act, we comment on legal as well as technical enforcement problems and propose practical implications for bias detection and bias correction, with the aim of specifying and complying with concrete technical requirements.