As AI systems shift from tools to collaborators, a central question is how the skills of humans relying on them change over time. We study this question mathematically by modeling the joint evolution of human skill and AI delegation as a coupled dynamical system. In our model, delegation adapts to relative performance, while skill improves through use and decays under non-use; crucially, both updates arise from optimizing a single performance metric measuring expected task error. Despite this local alignment, adaptive AI use fundamentally alters the global stability structure of human skill acquisition. Beyond the high-skill equilibrium of human-only learning, the system admits a *stable* low-skill equilibrium corresponding to persistent reliance, separated by a sharp basin boundary that makes early decisions effectively irreversible under the induced dynamics. We further show that AI assistance can strictly improve short-run performance while inducing persistent long-run performance loss relative to the no-AI baseline, driven by a negative feedback between delegation and practice. We characterize how AI quality deforms the basin boundary and show that these effects are robust to noise and asymmetric trust updates. Our results identify stability, not incentives or misalignment, as the central mechanism by which AI assistance can undermine long-run human performance and skill.
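The abstract specifies the model only qualitatively. As a minimal illustrative sketch (not the paper's actual specification), the Python snippet below assumes simple functional forms: skill $s$ and delegation $d$ in $[0,1]$, human error $1-s$, AI error $1-q$ for an assumed quality parameter $q$, delegation adapting to the error gap, and skill growing with practice $(1-d)$ while decaying with non-use. The rates `alpha`, `beta`, `gamma` are illustrative choices, but the sketch reproduces the qualitative bistability described above: nearby initial skill levels end up at different equilibria.

```python
import numpy as np

def simulate(s0, d0, q=0.8, alpha=0.15, beta=0.1, gamma=0.02, T=2000):
    """Iterate an assumed discrete-time version of the coupled skill/delegation dynamics.

    s     : human skill in [0, 1]          (human error = 1 - s)
    d     : delegation fraction in [0, 1]  (AI error = 1 - q, with q = AI quality)
    alpha : rate at which delegation adapts to relative performance
    beta  : skill gain per unit of practice (1 - d)
    gamma : skill decay per unit of non-use d
    """
    s, d = s0, d0
    for _ in range(T):
        err_human, err_ai = 1.0 - s, 1.0 - q
        # Delegation shifts toward whichever option currently has lower expected error.
        d = float(np.clip(d + alpha * (err_human - err_ai), 0.0, 1.0))
        # Skill improves through use and decays under non-use.
        s = float(np.clip(s + beta * (1.0 - d) * (1.0 - s) - gamma * d * s, 0.0, 1.0))
    return s, d

# Two nearby starting skill levels, same initial delegation, different long-run outcomes:
print(simulate(s0=0.85, d0=0.5))  # settles near the high-skill, no-delegation equilibrium
print(simulate(s0=0.70, d0=0.5))  # settles near the low-skill, full-delegation equilibrium
```

In this toy version, both updates descend the same expected-error objective $d(1-q) + (1-d)(1-s)$, yet whether the trajectory crosses the basin boundary depends on the initial skill relative to the assumed AI quality $q$, mirroring the irreversibility claim in the abstract.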