Objective: This paper develops a theoretical framework explaining when and why AI explanations enhance versus impair human decision-making. Background: Transparency is advocated as universally beneficial for human-AI interaction, yet identical AI explanations improve decision quality in some contexts but impair it in others. Current theories (trust calibration, cognitive load, and self-determination theory) cannot fully account for this paradox. Method: The framework models autonomy as a continuous stochastic process influenced by information-induced cognitive load. Using stochastic control theory, autonomy evolution is formalized as geometric Brownian motion with information-dependent drift, and optimal transparency is derived via Hamilton-Jacobi-Bellman equations. Monte Carlo simulations validate the theoretical predictions. Results: Mathematical analysis generates five testable predictions concerning disengagement timing, working memory moderation, autonomy trajectory shapes, and optimal information levels. Computational solutions demonstrate that dynamic transparency policies outperform both maximum and minimum transparency by adapting to the user's real-time cognitive state. The optimal policy exhibits a threshold structure: provide information when autonomy is high and accumulated load is low; withhold it when cognitive resources are depleted. Conclusion: Transparency effects depend on dynamic cognitive resource depletion rather than static design choices. Information provision triggers metacognitive processing that reduces perceived control when cognitive load exceeds working memory capacity. Application: The framework provides design principles for adaptive AI systems: adjust transparency based on real-time cognitive state, implement information budgets that respect capacity limits, and personalize thresholds based on individual working memory capacity.
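The Method and Results sections above can be illustrated with a minimal Monte Carlo sketch. The code below simulates autonomy as geometric Brownian motion whose drift depends on whether information is provided and on whether accumulated cognitive load exceeds working-memory capacity, then compares a threshold policy (provide information only when autonomy is high and load is low) against always-on and always-off transparency. All parameter values, thresholds, and function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def simulate_autonomy(policy, a0=1.0, sigma=0.2, T=1.0, n_steps=100,
                      drift_base=0.02, drift_bonus=0.05, drift_overload=-0.15,
                      load_rate=2.0, decay_rate=0.05, capacity=1.0, rng=None):
    """One autonomy trajectory A_t under a transparency policy u(A_t, L_t).

    Autonomy follows geometric Brownian motion with information-dependent
    drift: providing information adds drift_bonus while accumulated load L_t
    stays within working-memory capacity, but flips to drift_overload once
    capacity is exceeded. All parameter values are illustrative assumptions.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    dt = T / n_steps
    a, load = a0, 0.0
    for _ in range(n_steps):
        if policy(a, load):                          # u = 1: provide information
            load += load_rate * dt                   # information accumulates load
            mu = drift_base + (drift_bonus if load <= capacity else drift_overload)
        else:                                        # u = 0: withhold
            load = max(0.0, load - decay_rate * dt)  # load slowly dissipates
            mu = drift_base
        # Exact GBM step: A_{t+dt} = A_t * exp((mu - sigma^2/2) dt + sigma dW)
        a *= np.exp((mu - 0.5 * sigma**2) * dt
                    + sigma * np.sqrt(dt) * rng.standard_normal())
    return a

# Threshold policy from the abstract: provide information only while autonomy
# is high and accumulated load is low (threshold values here are assumed).
def threshold_policy(a, load, a_min=0.8, load_max=0.9):
    return a >= a_min and load <= load_max

always_on = lambda a, load: True      # maximum transparency
always_off = lambda a, load: False    # minimum transparency

def mean_final_autonomy(policy, n_runs=2000, seed=42):
    # Same seed per policy => paired comparison on identical noise paths.
    rng = np.random.default_rng(seed)
    return float(np.mean([simulate_autonomy(policy, rng=rng)
                          for _ in range(n_runs)]))
```

Under these assumed dynamics the threshold policy keeps accumulated load just below capacity, so it retains the positive drift of information provision without ever triggering overload, whereas always-on transparency overloads midway through the horizon and always-off forgoes the drift bonus entirely; this reproduces the qualitative claim that dynamic policies dominate both static extremes.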