LLMs have long demonstrated remarkable effectiveness in automatic program repair (APR), with OpenAI's ChatGPT being one of the most widely used models in this domain. Through continuous iteration and upgrades of the GPT-family models, their bug-fixing performance has reached state-of-the-art levels. However, few studies have compared the effectiveness of different versions of GPT-family models on APR. In this work, inspired by the recent public release of the o1 models, we conduct the first study comparing the effectiveness of different versions of GPT-family models in APR. We evaluate the latest GPT-family models (i.e., o1-preview and o1-mini), GPT-4o, and a historical version of ChatGPT on APR. We conduct an empirical study of these four GPT-family models against other LLMs and APR techniques on the QuixBugs benchmark from multiple evaluation perspectives, including repair success rate, repair cost, response length, and behavior patterns. The results demonstrate that o1's repair capability exceeds that of prior GPT-family models, successfully fixing all 40 bugs in the benchmark. Our work can serve as a foundation for further in-depth exploration of the applications of GPT-family models in APR.