Large Language Models (LLMs) have significantly advanced code generation but often require substantial resources and tend to over-generalize, limiting their efficiency on specific tasks. Fine-tuning smaller, open-source LLMs presents a viable alternative; however, it typically lags behind cutting-edge models because supervised fine-tuning relies solely on correct code examples, which restricts the model's ability to learn from its own mistakes and adapt to diverse programming challenges. To bridge this gap, we introduce CodeLutra, a novel framework that enhances low-performing LLMs by leveraging both successful and failed code generation attempts. Unlike conventional fine-tuning, CodeLutra employs an iterative preference learning mechanism that compares correct and incorrect solutions while maximizing the likelihood of correct code. Through continuous iterative refinement, CodeLutra enables smaller LLMs to match or surpass GPT-4's performance on various code generation tasks without relying on vast external datasets or larger auxiliary models. On a challenging data analysis task, using just 500 samples improved Llama-3-8B's accuracy from 28.2% to 48.6%, approaching GPT-4's performance. These results highlight CodeLutra's potential to close the gap between open-source and closed-source models, making it a promising approach in the field of code generation.
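The abstract describes a training objective that both compares correct against incorrect solutions and maximizes the likelihood of correct code. A minimal sketch of one plausible form of such an objective is shown below, assuming a DPO-style preference term combined with a supervised likelihood term on the correct solution; the function name, the `beta` and `sft_weight` hyperparameters, and the exact combination are illustrative assumptions, not CodeLutra's published loss.

```python
import math

def preference_plus_likelihood_loss(logp_chosen, logp_rejected,
                                    ref_logp_chosen, ref_logp_rejected,
                                    beta=0.1, sft_weight=1.0):
    """Hypothetical sketch: DPO-style preference loss plus an SFT term.

    logp_*     : summed token log-probabilities of the correct (chosen) and
                 incorrect (rejected) code under the policy being trained.
    ref_logp_* : the same quantities under a frozen reference model.
    """
    # Implicit reward margin of the policy over the reference model:
    # how much more the policy prefers the correct code than the reference does.
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # Preference term: -log sigmoid(margin), pushing the policy to rank
    # the correct solution above the incorrect one (as in DPO).
    pref_loss = -math.log(1.0 / (1.0 + math.exp(-margin)))
    # Likelihood term: directly maximize the probability of the correct code.
    sft_loss = -logp_chosen
    return pref_loss + sft_weight * sft_loss
```

In an iterative setup of this kind, each round would sample candidate solutions from the current model, label them correct or incorrect by executing them, and use the resulting pairs to retrain the model before sampling again.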