A cautious interpretation of AI regulations and policy in the EU and the USA places explainability as a central deliverable of compliant AI systems. However, from a technical perspective, explainable AI (XAI) remains an elusive and complex target: even state-of-the-art methods often yield erroneous, misleading, or incomplete explanations. "Explainability" has multiple meanings that are often used interchangeably, and there is an even greater number of XAI methods, none of which presents a clear edge. Indeed, each XAI method has multiple failure modes, which require application-specific development and continuous evaluation. In this paper, we analyze legislative and policy developments in the United States and the European Union, such as the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, the AI Act, the AI Liability Directive, and the General Data Protection Regulation (GDPR), from a right-to-explanation perspective. We argue that these AI regulations, together with current market conditions, threaten effective AI governance and safety because the objective of trustworthy, accountable, and transparent AI is intrinsically linked to the questionable ability of AI operators to provide meaningful explanations. Unless governments explicitly tackle explainability through clear legislative and policy statements that take technical realities into account, AI governance risks becoming a vacuous "box-ticking" exercise in which scientific standards are replaced with legalistic thresholds, providing only a false sense of security in XAI.