While much recent research focuses on enhancing the textual reasoning capabilities of Large Language Models (LLMs) by optimizing multi-agent frameworks or reasoning chains, several benchmark tasks can be solved with 100\% success through direct coding, which is more scalable and avoids the computational overhead of textual iterating and searching. Textual reasoning has inherent limitations in solving tasks that involve math, logic, optimization, and searching, and these limitations are unlikely to be overcome by simply scaling up model and data size. The recently released OpenAI GPT Code Interpreter and multi-agent frameworks such as AutoGen have demonstrated remarkable proficiency in integrating code generation and execution to solve complex tasks with LLMs. However, based on our experiments with 7 existing popular methods for steering code/text generation in both single- and multi-turn settings, across 14 tasks and 6 types of LLMs (including the new O1-preview), there is currently no optimal method to correctly steer LLMs to write code when needed. We discover interesting patterns in when models use code versus textual reasoning as task complexity and model size evolve, which even result in an astonishing inverse scaling behavior. We also find that results from LLM-written code are not always better than those from textual reasoning, even when the task is solvable through code. To mitigate these issues, we propose three methods to better steer LLM code/text generation and achieve notable improvements. The token-length and runtime costs of all methods are thoroughly discussed. We believe the problem of steering LLM code/text generation is critical for future research and leaves much room for improvement. The project page, datasets, and code are available at https://yongchao98.github.io/CodeSteer/.
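To make the code-versus-text contrast concrete, the following minimal sketch (an illustrative assumption, not the paper's code or necessarily one of its 14 tasks) brute-forces a Game-of-24-style puzzle. Step-by-step textual reasoning over such search problems is error-prone, whereas a short program solves every instance with 100\% reliability:

```python
from itertools import permutations, product

OPS = ['+', '-', '*', '/']

def solve24(nums, target=24, eps=1e-6):
    """Return an expression over all four numbers evaluating to target, or None.

    Hypothetical helper for illustration: exhaustively tries every ordering,
    operator choice, and parenthesization (the 5 binary-tree shapes on 4 operands).
    """
    for a, b, c, d in permutations(nums):
        for o1, o2, o3 in product(OPS, repeat=3):
            for expr in (f"(({a}{o1}{b}){o2}{c}){o3}{d}",
                         f"({a}{o1}{b}){o2}({c}{o3}{d})",
                         f"({a}{o1}({b}{o2}{c})){o3}{d}",
                         f"{a}{o1}(({b}{o2}{c}){o3}{d})",
                         f"{a}{o1}({b}{o2}({c}{o3}{d}))"):
                try:
                    if abs(eval(expr) - target) < eps:
                        return expr
                except ZeroDivisionError:
                    continue  # skip expressions that divide by zero
    return None

print(solve24([4, 9, 10, 13]))  # finds e.g. (10-4)*(13-9) = 24
```

A steering method only needs to recognize that this task belongs in the "generate code" regime and emit such a search program, rather than attempting the combinatorial search in natural language.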