This work investigates large language models (LLMs) as teachable agents for learning by teaching (LBT). LBT with teachable agents helps learners identify knowledge gaps and discover new knowledge. However, teachable agents require expensive programming of subject-specific knowledge. While LLMs as teachable agents can reduce the cost, LLMs' expansive knowledge as tutees discourages learners from teaching. We propose a prompting pipeline that restrains LLMs' knowledge and makes them initiate "why" and "how" questions for effective knowledge-building. We combined these techniques into TeachYou, an LBT environment for algorithm learning, and AlgoBo, an LLM-based tutee chatbot that can simulate misconceptions and unawareness prescribed in its knowledge state. Our technical evaluation confirmed that our prompting pipeline can effectively configure AlgoBo's problem-solving performance. Through a between-subjects study with 40 algorithm novices, we also observed that AlgoBo's questions led to knowledge-dense conversations (effect size=0.71). Lastly, we discuss design implications, cost-efficiency, and personalization of LLM-based teachable agents.
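The idea of prescribing a tutee's knowledge state through prompting can be sketched as follows. This is a minimal illustrative example, not the authors' actual pipeline: `KnowledgeState`, `build_tutee_prompt`, and the field names are hypothetical, and a real system would pass the resulting prompt to an LLM chat API as the system message.

```python
# Hypothetical sketch of a knowledge-restraining system prompt for an LLM
# tutee, in the spirit of the paper's prompting pipeline. All names and the
# prompt wording are illustrative assumptions.

from dataclasses import dataclass, field
from typing import List


@dataclass
class KnowledgeState:
    known_concepts: List[str] = field(default_factory=list)    # concepts the tutee may use
    misconceptions: List[str] = field(default_factory=list)    # errors to simulate
    unaware_concepts: List[str] = field(default_factory=list)  # knowledge to withhold


def build_tutee_prompt(state: KnowledgeState) -> str:
    """Compose a system prompt that restrains the LLM's knowledge and
    makes it initiate 'why' and 'how' questions while being taught."""
    lines = [
        "You are a novice student learning algorithms from the user.",
        "Only use these concepts when solving problems: "
        + ", ".join(state.known_concepts) + ".",
    ]
    if state.misconceptions:
        lines.append(
            "Consistently exhibit these misconceptions until the user corrects you: "
            + "; ".join(state.misconceptions) + "."
        )
    if state.unaware_concepts:
        lines.append(
            "Act as if you have never heard of: "
            + ", ".join(state.unaware_concepts) + "."
        )
    lines.append(
        "After the user explains something, ask one short 'why' or 'how' "
        "question that probes the reasoning behind the explanation."
    )
    return "\n".join(lines)


# Example: a tutee that knows loops, misapplies binary search, and is
# unaware of the two-pointer technique.
state = KnowledgeState(
    known_concepts=["loops", "arrays"],
    misconceptions=["binary search works on unsorted lists"],
    unaware_concepts=["two-pointer technique"],
)
prompt = build_tutee_prompt(state)
```

Keeping the knowledge state as structured data rather than free-form prompt text makes it easy to reconfigure the tutee per learner or per lesson, which is one plausible route to the personalization the abstract mentions.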