Research on reasoning in language models (LMs) predominantly focuses on improving the correctness of their outputs. But some important applications require modeling reasoning patterns that are incorrect. For example, automated systems that can reason about and simulate student errors are useful for providing real-time feedback in the classroom or offline practice for educators-in-training. This paper presents a new method, MISTAKE, that (1) constructs high-quality synthetic examples of reasoning errors by leveraging cycle consistency between incorrect answers and latent misconceptions; and (2) uses the generated data to learn models for student simulation, misconception classification, and answer generation. We evaluate MISTAKE on three educational tasks and find that it results in (1) higher accuracy when simulating incorrect student answers based on specific misconceptions, (2) improved performance when inferring latent misconceptions from observed incorrect answers, and (3) higher alignment with expert-written distractor answers when generating incorrect answers (e.g., for multiple-choice tests).
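The cycle-consistency idea in the abstract can be made concrete with a minimal sketch: generate an incorrect answer from a candidate misconception (forward direction), then infer the misconception back from that answer (inverse direction), and keep only triples where the two agree. This is an illustrative assumption about the filtering mechanism, not MISTAKE's actual implementation; the `forward_model` and `inverse_model` lookup tables below stand in for the paper's LM-based generators.

```python
# Hedged sketch of cycle-consistency filtering for synthetic reasoning errors.
# The "models" here are simple lookup tables standing in for LM calls; the
# specific prompts and models used by MISTAKE are not specified in this sketch.

def simulate_incorrect_answer(question, misconception, forward_model):
    """Forward pass: produce an incorrect answer implied by a misconception."""
    return forward_model[(question, misconception)]

def infer_misconception(question, answer, inverse_model):
    """Inverse pass: recover the latent misconception behind an incorrect answer."""
    return inverse_model[(question, answer)]

def cycle_consistent_examples(questions, misconceptions,
                              forward_model, inverse_model):
    """Keep only (question, misconception, answer) triples whose inferred
    misconception matches the one used to generate the answer."""
    kept = []
    for q in questions:
        for m in misconceptions:
            a = simulate_incorrect_answer(q, m, forward_model)
            if infer_misconception(q, a, inverse_model) == m:
                kept.append((q, m, a))
    return kept

# Toy example: one arithmetic question, two hypothetical misconceptions.
q = "3 + 4 * 2 = ?"
misconceptions = ["left-to-right evaluation", "adds all numbers"]
forward = {
    (q, "left-to-right evaluation"): "14",  # (3 + 4) * 2
    (q, "adds all numbers"): "9",           # 3 + 4 + 2
}
inverse = {
    (q, "14"): "left-to-right evaluation",
    (q, "9"): "adds all numbers",
}
data = cycle_consistent_examples([q], misconceptions, forward, inverse)
```

In this toy setup both triples survive the cycle check; in practice, examples where the inverse model recovers a different misconception would be discarded, which is what makes the retained synthetic data high-quality.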