Engineering education faces a double disruption: traditional apprenticeship models that cultivated judgment and tacit skill are eroding, just as generative AI emerges as an informal coaching partner. This convergence rekindles long-standing questions in the philosophy of AI and cognition about the limits of computation, the nature of embodied rationality, and the distinction between information processing and wisdom. Building on this rich intellectual tradition, this paper examines whether AI chatbots can provide coaching that fosters mastery rather than merely delivering information. We synthesize critical perspectives from decades of scholarship on expertise, tacit knowledge, and human-machine interaction, situating them within the context of contemporary AI-driven education. Empirically, we report findings from a mixed-methods study (N = 75 students, N = 7 faculty) exploring the use of a coaching chatbot in engineering education. Results reveal a consistent boundary: participants accept AI for technical problem solving (convergent tasks; M = 3.84 on a 1-5 Likert scale) but remain skeptical of its capacity for moral, emotional, and contextual judgment (divergent tasks). Faculty express stronger concerns over risk (M = 4.71 vs. M = 4.14, p = 0.003), and privacy emerges as a key requirement, with 64-71 percent of participants demanding strict confidentiality. Our findings suggest that while generative AI can democratize access to cognitive and procedural support, it cannot replicate the embodied, value-laden dimensions of human mentorship. We propose a multiplex coaching framework that integrates human wisdom within expert-in-the-loop models, preserving the depth of apprenticeship while leveraging AI scalability to enrich the next generation of engineering education.