In real-world software development, improper or missing exception handling can severely undermine the robustness and reliability of code. Exception handling requires developers to detect, capture, and manage exceptions to a high standard, yet many developers struggle with these tasks, producing fragile code. The problem is especially evident in open-source projects and degrades the overall quality of the software ecosystem. To address this challenge, we explore the use of large language models (LLMs) to improve exception handling in code. Through extensive analysis, we identify three key issues: Insensitive Detection of Fragile Code, Inaccurate Capture of Exception Types, and Distorted Handling Solutions. These problems are widespread in real-world repositories, suggesting that robust exception handling practices are often overlooked or mishandled. In response, we propose Seeker, a multi-agent framework inspired by the strategies expert developers use for exception handling. Seeker employs five agents (Scanner, Detector, Predator, Ranker, and Handler) to help LLMs detect, capture, and resolve exceptions more effectively. Our work is the first systematic study of leveraging LLMs to enhance exception handling practices, providing valuable insights for future improvements in code reliability.
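To make the three issues concrete, the following is a minimal illustrative sketch (not taken from the paper; the function names and config-loading scenario are hypothetical). The fragile version exhibits all three problems at once: the failure-prone call goes undetected until wrapped, the capture is overly broad, and the handler silently distorts the outcome. The robust version captures specific exception types and recovers per cause.

```python
import json

def load_config_fragile(path):
    """Fragile: overly broad capture masks unrelated bugs (Inaccurate
    Capture), and the handler silently swallows every error, returning
    an empty config as if nothing happened (Distorted Handling)."""
    try:
        with open(path) as f:
            return json.load(f)
    except Exception:
        return {}

def load_config_robust(path):
    """More robust: each anticipated failure gets a specific exception
    type and a distinct, appropriate recovery."""
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        # Missing file is an expected case: fall back to defaults.
        return {}
    except json.JSONDecodeError as e:
        # Malformed content is a real error: surface it with context
        # instead of hiding it behind an empty config.
        raise ValueError(f"malformed config {path}: {e}") from e
```

The fragile version also illustrates Insensitive Detection: before the `try` block is added at all, `open` and `json.load` are failure-prone calls that a developer (or an LLM) must first recognize as needing protection.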