A cautious interpretation of AI regulations and policy in the EU and the USA places explainability as a central deliverable of compliant AI systems. However, from a technical perspective, explainable AI (XAI) remains an elusive and complex target: even state-of-the-art methods often produce erroneous, misleading, or incomplete explanations. "Explainability" carries multiple meanings that are often used interchangeably, and there is an even greater number of XAI methods, none of which presents a clear edge. Indeed, each XAI method has multiple failure modes, which require application-specific development and continuous evaluation. In this paper, we analyze legislative and policy developments in the United States and the European Union, such as the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, the AI Act, the AI Liability Directive, and the General Data Protection Regulation (GDPR), from a right-to-explanation perspective. We argue that these AI regulations and current market conditions threaten effective AI governance and safety because the objective of trustworthy, accountable, and transparent AI is intrinsically linked to the questionable ability of AI operators to provide meaningful explanations. Unless governments explicitly tackle the issue of explainability through clear legislative and policy statements that take technical realities into account, AI governance risks becoming a vacuous "box-ticking" exercise in which scientific standards are replaced with legalistic thresholds, providing only a false sense of security in XAI.