Current discourse on Artificial Intelligence (AI) ethics, dominated by "trustworthy" and "responsible" AI, overlooks a more fundamental human-computer interaction (HCI) crisis: the erosion of human agency. This paper argues that the primary challenge of high-stakes AI systems is not trust but the preservation of human causal control. We posit that "bad AI" will function as "bad UI," a metaphor for catastrophic interface failures that misrepresent system state and lead to human error. Drawing on Marshall McLuhan's media theory, we frame AI as a technology of "augmentation" that simultaneously "amputates" the user's direct perception of causality. This positions the interface as the critical locus where a "double uncertainty" (that of the human user and that of the probabilistic model) must be mediated. We critique current Explainable AI (XAI) for its correlational focus and its failure to represent uncertainty. We conclude by proposing a rigorous, nested Causal-Agency Framework (CAF) that integrates causal models, uncertainty quantification, and human-centered evaluation to restore agency at the interface.