We study the legal challenges of automated decision-making by analysing conventional algorithmic fairness approaches and their alignment with anti-discrimination law in the United Kingdom and other jurisdictions based on English common law. By translating principles of anti-discrimination law into a decision-theoretic framework, we formalise discrimination and propose a new, legally informed approach to developing systems for automated decision-making. Our investigation reveals that, while algorithmic fairness approaches have adapted concepts from legal theory, they can conflict with legal standards, highlighting the importance of bridging the gap between automated decisions, fairness, and anti-discrimination doctrine.