Context: The rapid evolution of Large Language Models (LLMs) has sparked significant interest in leveraging their capabilities to automate code review processes. Prior studies often focus on developing LLMs for code review automation, yet this requires expensive resources, which is infeasible for organizations with limited budgets. Thus, fine-tuning and prompt engineering are the two common approaches to leveraging LLMs for code review automation. Objective: We aim to investigate the performance of LLM-based code review automation in two contexts, i.e., when LLMs are leveraged by fine-tuning and by prompting. Fine-tuning involves training the model on a specific code review dataset, while prompting involves providing explicit instructions to guide the model's generation without requiring a specific code review dataset. Method: We apply model fine-tuning and inference techniques (i.e., zero-shot learning, few-shot learning, and persona) to LLM-based code review automation. In total, we investigate 12 variations of two LLM-based code review automation approaches (i.e., GPT-3.5 and Magicoder), and compare them with the approach of Guo et al. and three existing code review automation approaches. Results: Fine-tuning GPT-3.5 with zero-shot learning achieves 73.17%-74.23% higher Exact Match (EM) than the approach of Guo et al. In addition, when GPT-3.5 is not fine-tuned, GPT-3.5 with few-shot learning achieves 46.38%-659.09% higher EM than GPT-3.5 with zero-shot learning. Conclusions: Based on our results, we recommend that (1) LLMs for code review automation be fine-tuned to achieve the highest performance; and (2) when data is insufficient for model fine-tuning (e.g., a cold-start problem), few-shot learning without a persona be used for LLM-based code review automation.
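The three inference techniques named in the abstract (zero-shot learning, few-shot learning, and persona) differ only in how the prompt is assembled before it is sent to the model. The sketch below illustrates that distinction; the prompt wording, helper name `build_prompt`, and the example data are hypothetical illustrations, not the paper's actual templates or dataset.

```python
# Hypothetical sketch of zero-shot vs. few-shot vs. persona prompt
# construction for code review automation. Wording and examples are
# illustrative only, not the templates used in the study.

PERSONA = "You are an expert software developer reviewing code."
INSTRUCTION = ("Improve the submitted code according to the reviewer "
               "comment and return the revised code.")

def build_prompt(submitted_code, comment, examples=(), persona=False):
    """Assemble a code-review prompt.

    examples: (code, comment, revised) triples; an empty sequence gives a
    zero-shot prompt, a non-empty one a few-shot prompt. persona=True
    prepends a role description to the instruction.
    """
    parts = [PERSONA] if persona else []
    parts.append(INSTRUCTION)
    for code, cmt, revised in examples:  # few-shot demonstrations
        parts.append(f"Code: {code}\nComment: {cmt}\nRevised: {revised}")
    # The actual input ends with "Revised:" so the model completes it.
    parts.append(f"Code: {submitted_code}\nComment: {comment}\nRevised:")
    return "\n\n".join(parts)

# Zero-shot: instruction + input only.
zero_shot = build_prompt("def add(a,b): return a+b", "add type hints")

# Few-shot without a persona (the setup the abstract recommends for the
# cold-start case): one demonstration triple precedes the actual input.
demo = ("def f(x): return x*2", "rename f", "def double(x): return x*2")
few_shot = build_prompt("def add(a,b): return a+b", "add type hints",
                        examples=[demo])
```

A fine-tuned model would instead be trained on such (code, comment, revised) triples directly, so the zero-shot prompt alone typically suffices at inference time.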