The proliferation of Generative Artificial Intelligence has transformed benign cognitive offloading into a systemic risk of cognitive agency surrender. Driven by the commercial dogma of "zero-friction" design, highly fluent AI interfaces actively exploit human cognitive miserliness, prematurely satisfying the need for cognitive closure and inducing severe automation bias. To empirically quantify this epistemic erosion, we deployed a zero-shot semantic classification pipeline ($\tau = 0.7$) on 1,223 high-confidence AI-HCI papers from 2023 to early 2026. Our analysis reveals an escalating "agentic takeover": a brief 2025 surge in research defending human epistemic sovereignty (19.1%) was abruptly suppressed in early 2026 (13.1%) by an explosive shift toward optimizing autonomous machine agents (19.6%), while frictionless usability maintained a structural hegemony (67.3%). To dismantle this trap, we theorize "Scaffolded Cognitive Friction," repurposing Multi-Agent Systems (MAS) as explicit cognitive forcing functions (e.g., computational Devil's Advocates) to inject germane epistemic tension and disrupt heuristic execution. Furthermore, we outline a multimodal computational phenotyping agenda -- integrating gaze transition entropy, task-evoked pupillometry, fNIRS, and Hierarchical Drift Diffusion Modeling (HDDM) -- to mathematically decouple decision outcomes from cognitive effort. Ultimately, intentionally designed friction is not merely a psychological intervention, but a foundational technical prerequisite for enforcing global AI governance and preserving societal cognitive resilience.
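The thresholded zero-shot classification step described above can be illustrated with a minimal sketch. The abstract does not specify the pipeline's implementation; here we assume, purely for illustration, that papers and theme labels are represented as sentence embeddings (toy 3-dimensional vectors below) and that a paper receives every label whose cosine similarity clears the confidence threshold $\tau = 0.7$. The label names and vectors are hypothetical.

```python
import math

TAU = 0.7  # confidence threshold from the abstract


def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)


# Hypothetical theme embeddings; in practice these would come from a
# sentence encoder applied to a natural-language label description.
LABELS = {
    "epistemic_sovereignty": [0.9, 0.1, 0.0],
    "autonomous_agents": [0.1, 0.9, 0.1],
    "frictionless_usability": [0.0, 0.1, 0.9],
}


def classify(paper_vec, tau=TAU):
    """Assign every theme whose similarity to the paper clears tau."""
    return {name for name, ref in LABELS.items()
            if cosine(paper_vec, ref) >= tau}
```

A paper embedded near the "epistemic_sovereignty" direction, e.g. `classify([0.88, 0.12, 0.05])`, clears only that label; multi-label assignment falls out naturally when a paper sits between themes.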
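Of the phenotyping measures listed, gaze transition entropy has a standard closed form: the Shannon entropy of first-order transitions between areas of interest (AOIs), weighted by each AOI's empirical visit frequency, $H = -\sum_i \pi_i \sum_j p_{ij} \log_2 p_{ij}$. The sketch below computes this from a raw AOI fixation sequence; it is a generic implementation of the measure, not the paper's own code.

```python
import math
from collections import Counter


def gaze_transition_entropy(aoi_sequence):
    """Entropy (bits) of first-order gaze transitions between AOIs,
    weighted by the empirical frequency of each source AOI."""
    transitions = list(zip(aoi_sequence, aoi_sequence[1:]))
    if not transitions:
        return 0.0
    from_counts = Counter(src for src, _ in transitions)
    pair_counts = Counter(transitions)
    total = len(transitions)
    entropy = 0.0
    for (src, _dst), n in pair_counts.items():
        pi = from_counts[src] / total      # weight of source AOI
        p = n / from_counts[src]           # conditional transition prob.
        entropy -= pi * p * math.log2(p)
    return entropy
```

A perfectly stereotyped scanpath such as `"ABABAB"` yields zero entropy (every transition is deterministic), while more exploratory, less predictable scanning raises the value; higher entropy is commonly read as a proxy for broader information sampling.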