Iteratively improving and repairing source code with large language models (LLMs), known as refinement, has emerged as a popular way of generating programs that would be too complex to construct in one shot. Given a bank of test cases together with a candidate program, an LLM can improve that program when prompted with its failing test cases. But how to best iteratively refine code remains an open question, with prior work employing simple greedy or breadth-first strategies. We show here that refinement exposes an explore-exploit tradeoff: exploit by refining the program that passes the most test cases, or explore by refining a less-considered program. We frame this as an arm-acquiring bandit problem, which we solve with Thompson Sampling. The resulting LLM-based program synthesis algorithm is broadly applicable: across loop invariant synthesis, visual reasoning puzzles, and competition programming problems, we find that our new method can solve more problems using fewer language model calls.
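The arm-acquiring bandit framing can be sketched in a few lines: each candidate program is an arm whose Beta posterior tracks its test pass rate, Thompson Sampling decides which program to refine next, and every refinement adds a new arm rather than updating an old one. This is a minimal illustrative sketch, not the paper's implementation; the names `thompson_select` and `refine_loop` are hypothetical, and the LLM refinement call is stubbed out as a callback.

```python
import random

def thompson_select(arms):
    """Sample each arm's Beta(passed+1, failed+1) posterior over its
    test pass rate and return the program with the highest sample."""
    best, best_score = None, -1.0
    for prog, (passed, failed) in arms.items():
        score = random.betavariate(passed + 1, failed + 1)
        if score > best_score:
            best, best_score = prog, score
    return best

def refine_loop(initial_prog, run_tests, refine, budget=10):
    """Arm-acquiring bandit over refinement. `run_tests` returns a
    (passed, failed) pair; `refine` stands in for one LLM call that
    produces an improved candidate from a parent program."""
    arms = {initial_prog: run_tests(initial_prog)}
    for _ in range(budget):
        parent = thompson_select(arms)   # explore/exploit via sampling
        child = refine(parent)           # one (stubbed) LLM call
        passed, failed = run_tests(child)
        if failed == 0:
            return child                 # all tests pass: done
        arms[child] = (passed, failed)   # new program = new arm
    return thompson_select(arms)         # budget exhausted: best guess
```

A high-sample arm is usually the current best program (exploitation), but the posterior's spread means rarely tried programs are occasionally selected too (exploration), which is what distinguishes this from a greedy refinement loop.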