Information Retrieval-based Fault Localization (IRFL) techniques aim to identify the source files containing the root causes of reported failures. While existing techniques excel at ranking source files, challenges persist in bug report analysis and query construction, leading to potential information loss. Leveraging large language models such as GPT-4, this paper enhances IRFL by categorizing bug reports according to the programming entities, stack traces, and natural language text they contain. As the initial step of our approach, LLmiRQ, we apply a tailored query construction strategy to each category. To correct inaccurate queries, we introduce LLmiRQ+, a user- and conversation-based query reformulation approach. Additionally, to further improve query utilization, we train a learning-to-rank model that leverages key features such as the class-name match score and the call-graph score. Together, these steps significantly improve the relevance and accuracy of queries. Evaluation on 46 projects with 6,340 bug reports yields an MRR of 0.6770 and a MAP of 0.5118, surpassing seven state-of-the-art IRFL techniques.
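The reported MRR and MAP metrics summarize how highly the buggy files are ranked across all bug reports. A minimal sketch of how these two metrics are conventionally computed is shown below; this is an illustration of the standard definitions, not the paper's evaluation code, and the function and variable names are our own.

```python
def mrr(ranked_relevance):
    """Mean Reciprocal Rank: mean of 1/rank of the FIRST relevant
    (i.e., actually buggy) file in each ranked list.

    ranked_relevance: list of lists of booleans, one list per bug
    report, ordered by the ranker; True marks a buggy file."""
    total = 0.0
    for rels in ranked_relevance:
        for rank, is_relevant in enumerate(rels, start=1):
            if is_relevant:
                total += 1.0 / rank
                break  # only the first relevant hit counts for MRR
    return total / len(ranked_relevance)


def average_precision(rels):
    """Average of precision values at each rank where a relevant
    file appears; 0.0 if no relevant file was retrieved."""
    hits, precision_sum = 0, 0.0
    for rank, is_relevant in enumerate(rels, start=1):
        if is_relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / hits if hits else 0.0


def map_score(ranked_relevance):
    """Mean Average Precision over all bug reports."""
    return sum(average_precision(r) for r in ranked_relevance) / len(ranked_relevance)
```

For example, if one bug report's buggy file is ranked second and another's is ranked first, the MRR is (1/2 + 1/1) / 2 = 0.75.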