Artificial intelligence is increasingly embedded in human decision-making, where it can either enhance human reasoning or induce excessive cognitive dependence. This paper introduces a conceptual and mathematical framework for distinguishing cognitive amplification, in which AI improves hybrid human-AI performance while preserving human expertise, from cognitive delegation, in which reasoning is progressively outsourced to AI systems. To characterize these regimes, we define a set of operational metrics: the Cognitive Amplification Index (CAI*), the Dependency Ratio (D), the Human Reliance Index (HRI), and the Human Cognitive Drift Rate (HCDR). Together, these quantities provide a low-dimensional metric space for evaluating not only whether human-AI systems achieve genuine synergistic performance, but also whether such performance is cognitively sustainable for the human component over time. The framework highlights a central design tension in human-AI systems: maximizing short-term hybrid capability does not necessarily preserve long-term human cognitive competence. We therefore argue that human-AI systems should be designed under a cognitive sustainability constraint, such that gains in hybrid performance do not come at the cost of degradation in human expertise.