Automated Program Repair (APR) has evolved significantly with the advent of Large Language Models (LLMs). Fine-tuning LLMs for program repair is a recent avenue of research, with many dimensions that remain unexplored. Existing work mostly fine-tunes LLMs with naive code representations and is fundamentally limited in its ability to fine-tune larger LLMs. To address this problem, we propose RepairLLaMA, a novel program repair approach that combines 1) code representations tailored for APR and 2) LoRA, a state-of-the-art parameter-efficient LLM fine-tuning technique. RepairLLaMA thus produces a highly effective `program repair adapter' for fixing bugs with language models. Our experiments validate both concepts. First, fine-tuning adapters with program-repair-specific code representations enables the model to exploit meaningful repair signals. Second, parameter-efficient fine-tuning helps training converge and improves the adapter's ability to fix data points outside the fine-tuning data distribution. Overall, RepairLLaMA correctly fixes 125 Defects4J v2 bugs and 82 HumanEval-Java bugs, outperforming all baselines.
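The parameter-efficiency argument behind LoRA can be made concrete with a back-of-the-envelope calculation. The sketch below is illustrative only: the layer dimensions and rank are hypothetical assumptions, not RepairLLaMA's actual configuration. LoRA freezes the pretrained weight matrix W of a linear layer and trains only a low-rank update, so the effective weight becomes W + (alpha / r) * (B @ A), where A is r x d_in and B is d_out x r.

```python
# Hedged sketch of LoRA's parameter savings for one linear layer.
# Dimensions (4096 x 4096) and rank (r = 8) are hypothetical examples,
# not the configuration used in the RepairLLaMA paper.

def lora_param_counts(d_in: int, d_out: int, r: int) -> tuple[int, int]:
    """Return (full fine-tuning params, LoRA trainable params) for one layer."""
    full = d_in * d_out          # full fine-tuning updates every entry of W
    lora = r * (d_in + d_out)    # LoRA trains only A (r x d_in) and B (d_out x r)
    return full, lora

full, lora = lora_param_counts(d_in=4096, d_out=4096, r=8)
print(full, lora, f"{100 * lora / full:.2f}%")
```

With these example numbers, LoRA trains 65,536 parameters instead of 16,777,216 per layer, about 0.39% of the full count, which is why fine-tuning larger LLMs becomes tractable on modest hardware.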