Generative AI has the potential to transform the personalization and accessibility of education. However, it raises serious concerns about accuracy and about fostering students' independent critical thinking. In this study, we designed a helpful AI "Peer" to help students correct fundamental physics misconceptions related to Newtonian mechanics. In contrast to approaches that pursue near-perfect accuracy to create an authoritative AI tutor or teacher, we directly inform students that this AI can answer up to 40% of questions incorrectly. In a randomized controlled trial with 165 students, those who engaged in targeted dialogue with the AI Peer achieved post-test scores that were, on average, 10.5 percentage points higher (with a normalized gain more than 20 percentage points higher) than those of a control group that discussed physics history. Qualitative feedback indicated that 91% of the treatment group's AI interactions were rated as helpful. Furthermore, by comparing student performance on pre- and post-test questions targeting the same concepts, together with expert annotations of the AI interactions, we find initial evidence that the improvement in performance does not depend on the correctness of the AI's responses. With further research, the AI Peer paradigm described here could open new possibilities for how we learn with, adapt to, and grow alongside AI.
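The "normalized gain" reported above is conventionally the Hake gain, g = (post − pre) / (100 − pre), i.e. the fraction of the available room for improvement that a student actually realizes. A minimal sketch, assuming percentage scores and the standard Hake definition (the paper's exact computation may differ):

```python
def normalized_gain(pre: float, post: float) -> float:
    """Return the normalized (Hake) gain for pre/post scores given in percent.

    g = (post - pre) / (100 - pre): the share of the remaining
    headroom above the pre-test score that the student gained.
    """
    if pre >= 100:
        raise ValueError("pre-test score must be below 100% for a defined gain")
    return (post - pre) / (100.0 - pre)


# Example: a student moving from 40% to 70% realizes half of the
# available 60-point headroom, so g = 0.5.
print(normalized_gain(40.0, 70.0))  # 0.5
```

Because g normalizes by each student's headroom, it lets groups with different pre-test baselines be compared on a common scale, which is why the abstract reports it alongside the raw percentage-point difference.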