The EU AI Act makes explainability urgent for high-risk AI systems, yet most XAI research focuses on technical metrics rather than regulatory compliance. Understanding how legal requirements reshape XAI method design is challenging: the AI Act regulates organizational relationships (providers, deployers) in legal terminology, specifies obligations without concrete technical requirements, and underrepresents end-users, the very stakeholders whose needs human-centered XAI addresses. As regulations emerge globally, human-centered XAI practitioners face both a challenge and an opportunity: regulations pull XAI research toward real-world deployment, while practitioners can actively shape how explainability enables compliance, establishing a bidirectional relationship. Our contribution is threefold. First, we provide the first interdisciplinary analysis of XAI's role in the AI Act, conducted on a real-world clinical decision support system by a team comprising AI Act legal experts, ML engineers, and requirements engineers. Second, we systematically align XAI stakeholder roles with AI Act legal responsibilities, revealing where explainability methods address regulatory requirements and where additional measures are necessary. Third, we identify three key opportunities for human-centered XAI practitioners: actively defining their roles in regulatory implementation; making the user-to-affected-party relationship explicit where regulations address only provider-deployer obligations; and enabling compliance while building trust at multiple levels, from regulators to affected parties.