Automated Program Repair (APR) has evolved significantly with the advent of Large Language Models (LLMs). Fine-tuning LLMs for program repair is a recent avenue of research, with many dimensions yet to be explored. Existing work mostly fine-tunes LLMs with naive code representations and does not scale to frontier models. To address this problem, we propose RepairLLaMA, a novel program repair approach that 1) identifies optimal code representations for APR with fine-tuned models, and 2) pioneers state-of-the-art parameter-efficient fine-tuning (PEFT) techniques for program repair. This results in RepairLLaMA producing a highly effective `program repair adapter' for fixing bugs with AI. Our experiments demonstrate the validity of both concepts. First, fine-tuning adapters with program-repair-specific code representations enables the model to use meaningful repair signals and produce better patches. Second, parameter-efficient fine-tuning helps the fine-tuning process converge and clearly contributes to the effectiveness of RepairLLaMA in fixing bugs outside the fine-tuning data distribution. Overall, RepairLLaMA correctly fixes 144 Defects4J v2 bugs, 109 HumanEval-Java bugs, and 20 GitBug-Java bugs, outperforming all baselines.
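The "program repair adapter" rests on parameter-efficient fine-tuning, whose best-known instance is LoRA: the base model's weights stay frozen and only a low-rank additive update is trained, so the adapter is a small fraction of the full parameter count. The following is a minimal illustrative sketch of that idea in NumPy, not RepairLLaMA's actual implementation; the dimensions, rank, and variable names are assumptions chosen for illustration.

```python
import numpy as np

# LoRA-style adapter sketch: the frozen base weight W is augmented with a
# trainable low-rank product B @ A. Only r*(d_in + d_out) parameters are
# trained instead of d_in * d_out. Dimensions here are illustrative.
d_in, d_out, r = 512, 512, 8

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init

def forward(x):
    # Base output plus the low-rank adapter delta. Because B is zero at
    # initialization, the adapted model starts out identical to the base model.
    return W @ x + B @ (A @ x)

full_params = W.size
adapter_params = A.size + B.size
print(f"trainable fraction: {adapter_params / full_params:.4f}")
```

Because the trainable fraction is tiny (here about 3% of one layer's weights), fine-tuning converges with far less memory, and the resulting adapter can be shipped and swapped independently of the base model, which is what makes a task-specific "repair adapter" practical.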