AI-assisted software generation has increased development speed, but it has also amplified a persistent engineering problem: systems that are functionally correct may still be structurally insecure. In practice, prompt-based security review with large language models often suffers from uneven coverage, weak reproducibility, unsupported findings, and the absence of an immutable audit trail. The ESAA architecture addresses a related governance problem in agentic software engineering by separating heuristic agent cognition from deterministic state mutation through append-only events, constrained outputs, and replay-based verification. This paper presents ESAA-Security, a domain-specific specialization of ESAA for agent-assisted security auditing of software repositories, with particular emphasis on AI-generated or AI-modified code. ESAA-Security structures auditing as a governed execution pipeline with four phases (reconnaissance, domain audit execution, risk classification, and final reporting) and operationalizes the workflow into 26 tasks, 16 security domains, and 95 executable checks. The framework produces structured check results, vulnerability inventories, severity classifications, risk matrices, remediation guidance, executive summaries, and a final markdown/JSON audit report. The central idea is that security review should not be modeled as a free-form conversation with an LLM, but as an evidence-oriented audit process governed by contracts and events. In ESAA-Security, agents emit structured intentions under constrained protocols; the orchestrator validates them, persists accepted outputs to an append-only log, reprojects derived views, and verifies consistency through replay and hashing. The result is a traceable, reproducible, and risk-oriented audit architecture whose final report is auditable by construction.
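The append-only persistence and replay-based verification described in the abstract can be illustrated with a minimal sketch. The class, field names, and event schema below are illustrative assumptions for exposition, not the paper's actual API: each accepted agent output is hash-chained to the previous log head, so replay can detect any retroactive mutation of the audit trail.

```python
import hashlib
import json

class AuditEventLog:
    """Minimal sketch of an ESAA-style append-only event log with
    hash chaining and replay verification (names are hypothetical)."""

    GENESIS = "0" * 64  # head hash of the empty log

    def __init__(self):
        self._events = []        # append-only; entries are never mutated
        self._head = self.GENESIS

    def append(self, event: dict) -> str:
        """Persist an accepted structured output, chaining its digest
        to the current head so history cannot be rewritten silently."""
        payload = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((self._head + payload).encode()).hexdigest()
        self._events.append((digest, payload))
        self._head = digest
        return digest

    def replay(self) -> bool:
        """Recompute the full hash chain from the genesis head; any
        tampering with a stored event breaks every later digest."""
        head = self.GENESIS
        for digest, payload in self._events:
            expected = hashlib.sha256((head + payload).encode()).hexdigest()
            if expected != digest:
                return False
            head = expected
        return head == self._head

log = AuditEventLog()
log.append({"phase": "reconnaissance", "check": "repo-inventory", "status": "pass"})
log.append({"phase": "domain-audit", "check": "secrets-scan", "status": "fail"})
assert log.replay()  # consistent chain: replay succeeds
```

Derived views (vulnerability inventories, risk matrices) would be reprojected from this log rather than edited in place, which is what makes the final report auditable by construction.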