Large language models (LLMs) have exhibited complex reasoning abilities by generating rationales for questions, and have demonstrated exceptional performance on natural language processing (NLP) tasks. However, these reasoning capabilities generally emerge only in models with tens of billions of parameters, creating significant computational challenges for real-world deployment. Recent research has therefore concentrated on improving smaller open-source models through knowledge distillation (KD) from commercial LLMs. Nevertheless, most of these studies rely solely on the responses of a single LLM as the gold rationale for training. In this paper, we introduce a novel Mistake-Aware Peer-Review Distillation (MAPD) approach: 1) Instead of merely obtaining gold rationales from teachers, our method asks teachers to identify and explain the student's mistakes, providing customized instruction-tuning data. 2) We design a simulated peer-review process among teacher LLMs, which retains only the generated rationales that score above an acceptance threshold. This reduces the chance of a teacher guessing the correct answer with a flawed rationale, improving the quality of the instructional data. Comprehensive experiments and analysis on mathematical, commonsense, and logical reasoning tasks demonstrate the effectiveness of our method.