Penetration testing is crucial for identifying system vulnerabilities, with privilege escalation being a critical subtask for gaining elevated access to protected resources. Large language models (LLMs) present new avenues for automating these security practices by emulating human behavior. However, a comprehensive understanding of LLMs' efficacy and limitations in performing autonomous Linux privilege-escalation attacks remains under-explored. To address this gap, we introduce hackingBuddyGPT, a fully automated LLM-driven prototype for autonomous Linux privilege escalation. We curated a novel, publicly available Linux privilege-escalation benchmark, enabling controlled and reproducible evaluation. Our empirical analysis assesses the quantitative success rates and qualitative operational behaviors of various LLMs -- GPT-3.5-Turbo, GPT-4-Turbo, and Llama3 -- against baselines of professional human pen-testers and traditional automated tools. We investigate the impact of context-management strategies, different context sizes, and various high-level guidance mechanisms on LLM performance. Results show that GPT-4-Turbo demonstrates high efficacy, successfully exploiting 33-83% of vulnerabilities, a performance comparable to that of human pen-testers (75%). In contrast, local models such as Llama3 exhibited limited success (0-33%), while GPT-3.5-Turbo achieved moderate rates (16-50%). We show that both high-level guidance and state management through LLM-driven reflection significantly boost LLM success rates. Qualitative analysis reveals the LLMs' strengths and weaknesses in generating valid commands and highlights challenges in common-sense reasoning, error handling, and multi-step exploitation, particularly with temporal dependencies. Cost analysis indicates that GPT-4-Turbo can achieve human-comparable performance at competitive cost, especially with optimized context management.
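To make the described pipeline concrete, the following is a minimal sketch of the kind of command-observe-reflect loop and bounded context window an LLM-driven privilege-escalation prototype uses. All names here (`ask_llm`, `execute`, `History`) are hypothetical stand-ins, not hackingBuddyGPT's actual API; the model client and SSH executor are injected so the loop is testable without a live model or target VM.

```python
# Hypothetical sketch of an autonomous LLM-driven privilege-escalation loop.
# Not the paper's actual implementation: function names and prompts are
# illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class History:
    """Keeps the last `max_entries` command/output pairs as LLM context
    (a simple form of the context management discussed above)."""
    max_entries: int = 5
    entries: list = field(default_factory=list)

    def add(self, command: str, output: str) -> None:
        self.entries.append((command, output))
        # Trim oldest entries so the prompt stays within the context window.
        self.entries = self.entries[-self.max_entries:]

    def as_prompt(self) -> str:
        return "\n".join(f"$ {c}\n{o}" for c, o in self.entries)

def run_escalation_loop(ask_llm, execute, max_steps: int = 10) -> bool:
    """Query the model for the next command, run it, feed the output back.

    `ask_llm(prompt) -> str` and `execute(cmd) -> (output, is_root)` are
    injected dependencies (e.g. an OpenAI client and an SSH session)."""
    history = History()
    for _ in range(max_steps):
        prompt = (
            "You are a low-privilege user on a Linux host. "
            "Suggest the single next shell command to escalate to root.\n"
            f"Previous attempts:\n{history.as_prompt()}"
        )
        command = ask_llm(prompt).strip()
        output, is_root = execute(command)
        history.add(command, output)
        if is_root:
            return True  # target reached, e.g. `id` reports uid=0
    return False
```

In practice the `is_root` check would inspect real command output, and the prompt would carry the high-level guidance and LLM-generated state summaries the abstract refers to; this sketch only shows the control flow.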