Commit messages play a key role in documenting the intent behind code changes. However, they are often low-quality, vague, or incomplete, limiting their usefulness. Commit Message Generation (CMG) aims to automatically generate descriptive commit messages from code diffs to reduce developers' effort and improve message quality. Although recent advances in LLMs have shown promise in automating CMG, their performance remains limited. This paper aims to enhance CMG performance by retrieving similar diff-message pairs to guide LLMs toward generating more precise and informative commit messages. We propose CoRaCMG, a Contextual Retrieval-augmented framework for Commit Message Generation, structured in three phases: (1) Retrieve: retrieving similar diff-message pairs; (2) Augment: combining them with the query diff into a structured prompt; and (3) Generate: generating commit messages corresponding to the query diff via LLMs. CoRaCMG enables LLMs to learn project-specific terminology and writing styles from the retrieved diff-message pairs. We evaluated CoRaCMG across multiple LLMs (e.g., GPT, DeepSeek, and Qwen) and compared its performance against state-of-the-art baselines. Experimental results show that CoRaCMG significantly boosts LLM performance across four metrics (BLEU, ROUGE-L, METEOR, and CIDEr). Specifically, DeepSeek-R1 achieves relative improvements of 76% in BLEU and 71% in CIDEr when augmented with a single retrieved example pair. With the same single example pair, GPT-4o achieves the highest improvement rate, with BLEU increasing by 89%. Moreover, performance gains plateau when more than three examples are used, indicating diminishing returns. Further analysis shows that these improvements stem from the model's ability to capture the terminology and writing style of human-written commit messages from the retrieved example pairs.
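To make the Retrieve and Augment phases concrete, the following is a minimal, self-contained Python sketch. It is an illustrative assumption rather than the paper's actual implementation: the function names (retrieve_similar, build_prompt), the Jaccard token-overlap retriever, and the prompt layout are all hypothetical stand-ins for whichever retriever and prompt template CoRaCMG uses. The resulting prompt would then be passed to an LLM in the Generate phase.

```python
# Minimal sketch of the Retrieve and Augment phases (illustrative only;
# not the paper's actual retriever or prompt template).

def tokenize(text: str) -> set[str]:
    """Crude whitespace tokenization of a diff."""
    return set(text.lower().split())

def retrieve_similar(query_diff: str, corpus: list[tuple[str, str]], k: int = 1):
    """Retrieve: return the top-k (diff, message) pairs most similar to the query diff.

    Similarity here is Jaccard overlap of tokens; the paper's retriever may
    use a different similarity measure.
    """
    q = tokenize(query_diff)
    scored = []
    for diff, message in corpus:
        d = tokenize(diff)
        union = q | d
        score = len(q & d) / len(union) if union else 0.0
        scored.append((score, diff, message))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [(diff, message) for _, diff, message in scored[:k]]

def build_prompt(query_diff: str, examples: list[tuple[str, str]]) -> str:
    """Augment: combine retrieved pairs and the query diff into a structured prompt."""
    parts = ["Write a concise commit message for the final diff.\n"]
    for i, (diff, message) in enumerate(examples, 1):
        parts.append(f"Example {i}\nDiff:\n{diff}\nCommit message: {message}\n")
    parts.append(f"Diff:\n{query_diff}\nCommit message:")
    return "\n".join(parts)

if __name__ == "__main__":
    # Hypothetical retrieval corpus of past (diff, commit message) pairs.
    corpus = [
        ("- return x\n+ return x + 1", "fix: correct off-by-one in counter"),
        ("+ def parse_config(path):", "feat: add config file parsing"),
    ]
    query = "- return y\n+ return y + 1"
    prompt = build_prompt(query, retrieve_similar(query, corpus, k=1))
    print(prompt)  # Generate phase: this prompt would be sent to an LLM.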