Large Language Models (LLMs) have showcased impressive capabilities in handling straightforward programming tasks. However, their performance tends to falter when confronted with more challenging programming problems. We observe that conventional models often generate solutions as monolithic code blocks, restricting their effectiveness on intricate problems. To overcome this limitation, we present the Modular-of-Thought Coder (MoTCoder). We introduce a pioneering framework for Modular-of-Thought (MoT) instruction tuning, designed to promote the decomposition of tasks into logical sub-tasks and sub-modules. Our investigations reveal that, by generating and reusing sub-modules, MoTCoder significantly improves both the modularity and correctness of the generated solutions, yielding substantial relative pass@1 improvements of 12.9% on APPS and 9.43% on CodeContests. Our code is available at https://github.com/dvlab-research/MoTCoder.
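To make the contrast between monolithic and modular generations concrete, the following is a minimal, hypothetical sketch of the modular solution style the abstract describes: a toy problem decomposed into named sub-modules, each with a docstring, assembled by a top-level function. The problem, function names, and decomposition are illustrative assumptions, not prompts or outputs from the paper.

```python
# Hypothetical illustration of a modular (MoT-style) solution, as opposed
# to a single monolithic code block. The task and its decomposition are
# our own toy example.

def parse_input(raw: str) -> list[int]:
    """Sub-module 1: convert the raw problem input into a list of integers."""
    return [int(tok) for tok in raw.split()]

def longest_increasing_run(nums: list[int]) -> int:
    """Sub-module 2: core logic -- length of the longest strictly
    increasing contiguous run in nums."""
    best, cur, prev = 0, 0, None
    for x in nums:
        cur = cur + 1 if prev is not None and x > prev else 1
        best = max(best, cur)
        prev = x
    return best

def format_output(answer: int) -> str:
    """Sub-module 3: render the answer in the required output format."""
    return str(answer)

def main(raw: str) -> str:
    """Top-level solution assembled from the sub-modules above."""
    return format_output(longest_increasing_run(parse_input(raw)))

if __name__ == "__main__":
    assert main("1 2 3 1 2") == "3"
    print(main("5 6 7 8 1"))  # -> 4
```

In this style, each sub-module has a single responsibility and a documented interface, which is the property the MoT instruction tuning described above is meant to encourage.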