Large Language Models (LLMs) such as ChatGPT-4, Claude 3, and LLaMA 4 are increasingly embedded in software and application development, supporting tasks from code generation to debugging. Yet their real-world effectiveness at detecting diverse software bugs, particularly complex, security-relevant vulnerabilities, remains underexplored. This study presents a systematic empirical evaluation of these three leading LLMs on a benchmark spanning foundational programming errors, classic security flaws, and advanced, production-grade bugs in C++ and Python. The dataset integrates real code from SEED Labs, OpenSSL (via the Suresoft GLaDOS database), and PyBugHive, validated through local compilation and testing pipelines. A novel multi-stage, context-aware prompting protocol simulates realistic debugging scenarios, while a graded rubric measures detection accuracy, reasoning depth, and remediation quality. Our results show that all three models excel at identifying syntactic and semantic issues in well-scoped code, making them promising for educational use and as first-pass reviewers in automated code auditing. Performance degrades, however, on complex security vulnerabilities and large-scale production code, with ChatGPT-4 and Claude 3 generally providing more nuanced contextual analyses than LLaMA 4. These findings highlight both the promise and the present limitations of LLMs as reliable code analysis tools.