The field of Artificial Intelligence in Education (AIED) sits at the intersection of technology, education, and psychology, placing a strong emphasis on supporting learners' needs with compassion and understanding. The growing prominence of Large Language Models (LLMs) has led to the development of scalable solutions within educational settings, including the generation of different types of feedback in Intelligent Tutoring Systems (ITSs). However, these models are often applied by directly formulating prompts to solicit specific information, without a solid theoretical foundation for prompt construction or empirical assessment of the resulting impact on learning. This work advocates careful and caring AIED research: it reviews prior work on feedback generation in ITSs, with emphasis on the theoretical frameworks employed and the efficacy of the corresponding designs in empirical evaluations, and then suggests opportunities to apply these evidence-based principles to the design, experimentation, and evaluation phases of LLM-based feedback generation. The main contributions of this paper are: an argument for applying more cautious, theoretically grounded methods to feedback generation in the era of generative AI; and practical suggestions for theory- and evidence-based feedback design in LLM-powered ITSs.