The widespread use of artificial intelligence (AI) systems across various domains increasingly highlights issues of algorithmic fairness, especially in high-stakes scenarios. Critical consideration of how fairness in AI systems might be improved, and which measures are available to aid this process, is therefore overdue. Many researchers and policymakers see explainable AI (XAI) as a promising way to increase fairness in AI systems. However, there is a wide variety of XAI methods and fairness conceptions expressing different desiderata, and the precise connections between XAI and fairness remain largely nebulous. Moreover, different measures to increase algorithmic fairness may be applicable at different points in an AI system's lifecycle. Yet there is currently no coherent mapping of fairness desiderata along the AI lifecycle. In this paper, we set out to bridge both of these gaps: we distill eight fairness desiderata, map them along the AI lifecycle, and discuss how XAI could help address each of them. We hope to provide orientation for practical applications and to inspire XAI research specifically focused on these fairness desiderata.