The widespread use of artificial intelligence (AI) systems across various domains is increasingly surfacing issues related to algorithmic fairness, especially in high-stakes scenarios. Thus, critical considerations of how fairness in AI systems might be improved -- and what measures are available to aid this process -- are overdue. Many researchers and policymakers see explainable AI (XAI) as a promising way to increase fairness in AI systems. However, there is a wide variety of XAI methods and fairness conceptions expressing different desiderata, and the precise connections between XAI and fairness remain largely nebulous. Moreover, different measures to increase algorithmic fairness may be applicable at different points throughout an AI system's lifecycle. Yet, there is currently no coherent mapping of fairness desiderata along the AI lifecycle. In this paper, we distill eight fairness desiderata, map them along the AI lifecycle, and discuss how XAI could help address each of them. We hope to provide orientation for practical applications and to inspire XAI research specifically focused on these fairness desiderata.