As artificial intelligence assumes cognitive labor, no quantitative framework predicts when human capability loss becomes catastrophic. We present a two-variable dynamical-systems model coupling capability (H) and delegation (D), grounded in three axioms: learning requires capability, learning requires practice, and disuse causes forgetting. Calibrated to four domains (education, medicine, navigation, aviation), the model identifies a critical threshold K* approximately 0.85 (scope-dependent; broader AI scope lowers K*) beyond which capability collapses abruptly, the "enrichment paradox." Validated against PISA data from 15 countries (102 data points, R^2 = 0.946, 3 parameters, lowest BIC), the model predicts that periodic AI failures improve capability 2.7-fold and that 20% mandatory practice preserves 92% more capability than the simulation baseline (which includes a 5% background AI-failure rate). These findings provide quantitative foundations for AI capability-threshold governance.
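The abstract describes a coupled capability-delegation model with a collapse threshold but does not reproduce its equations. As a minimal sketch only, the toy ODE below (a hypothetical form, not the paper's actual system; the function name, parameters `learn` and `forget`, and the specific coupling are all illustrative assumptions) shows how the three axioms can produce a sharp delegation threshold:

```python
def simulate_capability(D, learn=1.0, forget=0.176, H0=0.9, dt=0.01, T=200.0):
    """Euler-integrate a toy capability ODE under a fixed delegation level D in [0, 1]:

        dH/dt = learn * H * (1 - D) * (1 - H) - forget * D * H

    Learning requires both existing capability (the H factor) and practice
    (the 1 - D factor); forgetting scales with disuse (the D factor).
    Returns the long-run capability level after time T.
    """
    H = H0
    for _ in range(int(T / dt)):
        dH = learn * H * (1 - D) * (1 - H) - forget * D * H
        H = max(0.0, H + dt * dH)  # capability cannot go negative
    return H

# For small H the growth rate is learn*(1-D) - forget*D, so capability
# survives only below the threshold D* = learn / (learn + forget),
# which these illustrative parameter values place near 0.85.
```

With these assumed parameters, moderate delegation (D = 0.5) settles at a high capability equilibrium, while near-total delegation (D = 0.95) drives capability to zero, an abrupt, threshold-like collapse of the kind the abstract attributes to the enrichment paradox.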