Current fairness metrics and mitigation techniques provide tools for practitioners to assess how non-discriminatory Automatic Decision Making (ADM) systems are. But what if I, as an individual facing a decision made by an ADM system, want to know: am I being treated fairly? We explore how to create the affordance for users to ask this question of ADM systems. In this paper, we argue for the reification of fairness not only as a property of ADM systems but also as an epistemic right of the individual: the right to acquire information about the decisions that affect them and to use that information to contest those decisions and seek effective redress if they prove discriminatory. We examine key concepts from existing research not only in algorithmic fairness but also in explainable artificial intelligence, accountability, and contestability. Integrating notions from these domains, we propose a conceptual framework for ascertaining fairness that combines different tools to empower the end users of ADM systems. Our framework shifts the focus from technical solutions aimed at practitioners to mechanisms that enable individuals to understand, challenge, and verify the fairness of decisions. It also serves as a blueprint for organizations and policymakers, bridging the gap between technical requirements and practical, user-centered accountability.