The widespread use of artificial intelligence (AI) systems across various domains is increasingly highlighting issues of algorithmic fairness, especially in high-stakes scenarios. Critical consideration of how fairness in AI systems might be improved, and what measures are available to aid this process, is therefore overdue. Many researchers and policymakers see explainable AI (XAI) as a promising way to increase fairness in AI systems. However, there is a wide variety of XAI methods and fairness conceptions expressing different desiderata, and the precise connections between XAI and fairness remain largely nebulous. Moreover, different measures to increase algorithmic fairness may be applicable at different points throughout an AI system's lifecycle. Yet there is currently no coherent mapping of fairness desiderata along the AI lifecycle. In this paper, we set out to bridge both of these gaps: we distill eight fairness desiderata, map them along the AI lifecycle, and discuss how XAI could help address each of them. We hope to provide orientation for practical applications and to inspire XAI research specifically focused on these fairness desiderata.