People's experiences of discrimination are often shaped by multiple intersecting factors, yet algorithmic fairness research rarely reflects this complexity. While intersectionality offers tools for understanding how forms of oppression interact, current approaches to intersectional algorithmic fairness tend to focus on narrowly defined demographic subgroups. These methods contribute important insights but risk oversimplifying social reality and neglecting structural inequalities. In this paper, we outline how a substantive approach to intersectional algorithmic fairness can reorient this research and practice. In particular, we propose Substantive Intersectional Algorithmic Fairness, extending Green's (2022) notion of substantive algorithmic fairness with insights from intersectional feminist theory. To make the guidance as actionable as possible, we articulate our approach as ten desiderata for the design, assessment, and deployment of algorithmic systems that address systemic inequities while mitigating harms to intersectionally marginalized communities. Rather than prescribing fixed operationalizations, these desiderata invite AI practitioners and experts to reflect on assumptions of neutrality, the use of protected attributes, the inclusion of multiply marginalized groups, and the transformative potential of algorithmic systems. By bridging computational and social science perspectives, the approach emphasizes that fairness cannot be separated from social context, and that in some cases, principled non-deployment may be necessary.