Automated Program Repair (APR) has evolved significantly with the advent of Large Language Models (LLMs). Fine-tuning LLMs for program repair is a recent avenue of research, with many unexplored dimensions. Existing work mostly fine-tunes LLMs with naive code representations and is fundamentally limited in its ability to fine-tune larger LLMs. To address this problem, we propose RepairLLaMA, a novel program repair approach that combines 1) code representations for APR and 2) the state-of-the-art parameter-efficient LLM fine-tuning technique LoRA. As a result, RepairLLaMA produces a highly effective `program repair adapter' for fixing bugs with language models. Our experiments demonstrate the validity of both concepts. First, fine-tuning adapters with program-repair-specific code representations enables the model to exploit meaningful repair signals. Second, parameter-efficient fine-tuning helps the fine-tuning process converge and improves the repair adapter's effectiveness in fixing data points outside the fine-tuning data distribution. Overall, RepairLLaMA correctly fixes 125 Defects4J v2 and 82 HumanEval-Java bugs, outperforming all baselines.
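To make the parameter-efficiency argument concrete, below is a minimal, self-contained sketch of the LoRA idea that RepairLLaMA builds on: instead of updating a full weight matrix W, one trains a low-rank update B·A whose parameter count is a small fraction of W's. All dimensions and names here are illustrative assumptions, not RepairLLaMA's actual configuration.

```python
import numpy as np

# Illustrative LoRA sketch (assumed dimensions, not RepairLLaMA's real config).
rng = np.random.default_rng(0)
d_out, d_in, r = 512, 512, 8

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                    # zero-initialized: no change at start
alpha = 16                                  # LoRA scaling hyperparameter

def forward(x):
    # Effective weight is W + (alpha / r) * B @ A; only A and B are trained.
    return W @ x + (alpha / r) * (B @ (A @ x))

full_params = W.size
adapter_params = A.size + B.size
print(f"full: {full_params}, adapter: {adapter_params}, "
      f"ratio: {adapter_params / full_params:.3%}")
# The adapter trains ~3% of the parameters of the full matrix in this setup,
# which is why larger LLMs become feasible to fine-tune on modest hardware.
```

Because B starts at zero, the adapted model initially behaves exactly like the pretrained one; fine-tuning then only moves the small A and B factors, which together form the `repair adapter' that is stored and shipped separately from the base model.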