The ability to generate diverse solutions to a given problem is a hallmark of human creativity. This divergent reasoning is also crucial for machines, enhancing their robustness and enabling them to assist humans in many applications such as scientific discovery. However, existing approaches to multi-step reasoning with large language models (LLMs) have focused mostly on reasoning accuracy, without also discovering the more diverse valid solutions a problem may admit. For example, supervised fine-tuning can improve LLM reasoning quality, but it requires extensive supervised data to capture the full range of possible solutions. Reinforcement learning seeks a limited set of highest-reward solutions while neglecting solution diversity. To fill this gap, we propose Flow of Reasoning (FoR), an efficient diversity-seeking LLM finetuning method aimed at improving reasoning quality and diversity with minimal data. FoR formulates multi-step LLM reasoning as a Markovian flow on a DAG-structured reasoning graph. This formulation allows us to incorporate and adapt principled GFlowNet approaches to finetune LLMs to sample diverse reasoning paths with probabilities proportional to the (unnormalized) reward of target problems. Extensive experiments show that, with limited training examples (e.g., 15 examples), FoR enables the discovery of diverse, creative, high-quality solutions, greatly outperforming a wide range of existing inference and training methods across five challenging puzzle-solving tasks, including BlocksWorld (embodied reasoning), Game24 (math puzzle solving), Rubik's Cube (spatial reasoning), 1D-ARC (abstraction reasoning), and PrOntoQA (logical reasoning). Code is available at https://github.com/Yu-Fangxu/FoR.
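To make the sampling claim concrete: GFlowNet-style finetuning trains a policy so that terminal states are sampled with probability proportional to their reward. A common way to achieve this (the specific objective FoR uses is not stated in the abstract, so trajectory balance is shown here only as a representative GFlowNet objective) is to minimize, over complete reasoning trajectories $\tau = (s_0 \to s_1 \to \cdots \to s_T)$:

```latex
\mathcal{L}_{\mathrm{TB}}(\tau) =
\left(
  \log \frac{Z_\theta \, \prod_{t=0}^{T-1} P_F(s_{t+1} \mid s_t; \theta)}
            {R(s_T) \, \prod_{t=0}^{T-1} P_B(s_t \mid s_{t+1})}
\right)^2
```

Here $P_F$ is the forward policy (the finetuned LLM proposing the next reasoning step), $P_B$ is a backward policy over the DAG, $Z_\theta$ is a learned estimate of the partition function, and $R(s_T)$ is the unnormalized reward of the completed solution. At the optimum, the policy samples terminal states with $P(s_T) \propto R(s_T)$, which is what yields diverse high-reward reasoning paths rather than a single reward-maximizing one.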