The EU AI Act makes explainability urgent for high-risk AI systems, yet most XAI research focuses on technical metrics rather than regulatory compliance. Understanding how legal requirements reshape XAI method design is challenging: the AI Act regulates organizational relationships (providers, deployers) in legal terminology, specifies obligations without concrete technical requirements, and underrepresents end-users, the very stakeholders whose needs human-centered XAI addresses. As regulations emerge globally, human-centered XAI practitioners face both a challenge and an opportunity: regulations pull XAI research toward real-world deployment, and practitioners can in turn shape how explainability enables compliance, making the relationship bidirectional. Our contribution is threefold. First, we provide the first interdisciplinary analysis of XAI's role in the AI Act, conducted on a real-world clinical decision support system by a team of AI Act legal experts, ML engineers, and requirements engineers. Second, we systematically align XAI stakeholder roles with AI Act legal responsibilities, revealing where explainability methods satisfy regulatory requirements and where additional measures are needed. Third, we identify three key opportunities for human-centered XAI practitioners: actively defining their roles in regulatory implementation; making the user-to-affected-party relationship explicit where regulations address only provider-deployer obligations; and enabling compliance while building multi-level trust, from regulators to affected parties.