Reverse thinking plays a crucial role in human reasoning. Humans can reason not only from a problem to a solution but also in reverse, i.e., starting from the solution and reasoning toward the problem. This often enhances overall reasoning performance, as it enables consistency checks between forward and backward thinking. To enable Large Language Models (LLMs) to perform reverse thinking, we introduce Reverse-Enhanced Thinking (RevThink), a framework composed of data augmentation and learning objectives. In RevThink, we augment the dataset by collecting structured forward-backward reasoning from a teacher model, consisting of: (1) the original question, (2) forward reasoning, (3) a backward question, and (4) backward reasoning. We then employ three objectives to train a smaller student model in a multi-task learning fashion: (a) generate forward reasoning from a question, (b) generate a backward question from a question, and (c) generate backward reasoning from the backward question. Experiments across 12 datasets covering commonsense, math, and logical reasoning show an average 13.53% improvement over the student model's zero-shot performance and a 6.84% improvement over the strongest knowledge distillation baselines. Moreover, our method demonstrates sample efficiency -- using only 10% of the correct forward reasoning from the training data, it outperforms a standard fine-tuning method trained on 10x more forward reasoning. RevThink also exhibits strong generalization to out-of-distribution held-out datasets.
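The three multi-task objectives above can be illustrated with a minimal sketch, not the authors' implementation: each teacher-annotated record yields three student training pairs, one per objective. The task-prefix tokens, field names, and the helper `build_examples` are all assumptions for illustration.

```python
# Sketch of RevThink-style data augmentation (assumed format, not the paper's code):
# one teacher-generated record expands into three (input, target) training pairs,
# matching objectives (a), (b), and (c) of the multi-task setup.

def build_examples(record):
    """record: dict with keys 'question', 'forward_reasoning',
    'backward_question', 'backward_reasoning' (teacher-generated)."""
    return [
        # (a) question -> forward reasoning
        {"input": f"[FORWARD] {record['question']}",
         "target": record["forward_reasoning"]},
        # (b) question -> backward question
        {"input": f"[BACKWARD-Q] {record['question']}",
         "target": record["backward_question"]},
        # (c) backward question -> backward reasoning
        {"input": f"[BACKWARD-R] {record['backward_question']}",
         "target": record["backward_reasoning"]},
    ]

# Toy math-reasoning record (invented example data).
record = {
    "question": "Alice has 3 apples and buys 2 more. How many apples does she have?",
    "forward_reasoning": "3 + 2 = 5, so the answer is 5.",
    "backward_question": "Alice has 5 apples after buying 2 more. How many did she start with?",
    "backward_reasoning": "5 - 2 = 3, so she started with 3.",
}

examples = build_examples(record)
print(len(examples))  # 3 training pairs from one augmented record
```

The backward pair also supports the consistency check motivating the framework: backward reasoning should recover a quantity stated in the original question.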