As AI tutors enter classrooms at unprecedented speed, their deployment increasingly outpaces our grasp of the psychological and social consequences of such technology. Yet decades of research in automation psychology, human factors, and human-computer interaction provide crucial insights that remain underutilized in educational AI design. This work synthesizes four research traditions -- automation psychology, human factors engineering, HCI, and philosophy of technology -- to establish a comprehensive framework for understanding how learners psychologically relate to anthropomorphic AI tutors. We identify three persistent challenges intensified by generative AI's conversational fluency. First, learners exhibit dual trust-calibration failures -- automation bias (uncritical acceptance) and algorithm aversion (excessive rejection after errors) -- alongside an expertise paradox in which novices over-rely while experts under-rely. Second, while anthropomorphic design enhances engagement, it can distract from learning and foster harmful emotional attachment. Third, the classic ironies of automation persist: systems meant to aid cognition introduce designers' errors, degrade skills through disuse, and create monitoring burdens that humans perform poorly. We ground this theoretical synthesis through a comparative analysis of 104,984 YouTube comments on AI-generated philosophical debates and human-created engineering tutorials, revealing domain-dependent trust patterns and strong anthropomorphic projection despite minimal cues. For engineering education, our synthesis mandates differentiated approaches: AI tutoring for technical foundations, where automation bias is manageable through proper scaffolding, but human facilitation for design, ethics, and professional judgment, where tacit knowledge transmission proves irreplaceable.