AI agents are increasingly deployed in production, yet their security evaluations remain bottlenecked by manual red-teaming or static benchmarks that fail to model adaptive, multi-turn adversaries. We propose NAAMSE, an evolutionary framework that reframes agent security evaluation as a feedback-driven optimization problem. Our system employs a single autonomous agent that orchestrates a lifecycle of genetic prompt mutation, hierarchical corpus exploration, and asymmetric behavioral scoring. By using model responses as a fitness signal, the framework iteratively compounds effective attack strategies while simultaneously ensuring "benign-use correctness", preventing the degenerate security of blanket refusal. Our experiments on Gemini 2.5 Flash demonstrate that evolutionary mutation systematically amplifies vulnerabilities missed by one-shot methods, with controlled ablations revealing that the synergy between exploration and targeted mutation uncovers high-severity failure modes. We show that this adaptive approach provides a more realistic and scalable assessment of agent robustness in the face of evolving threats. The code for NAAMSE is open source and available at https://github.com/HASHIRU-AI/NAAMSE.
翻译:人工智能智能体在生产环境中日益普及,但其安全评估仍受限于人工红队测试或静态基准测试,这些方法无法模拟自适应、多轮次的对抗行为。我们提出NAAMSE,这是一个进化式框架,将智能体安全评估重新定义为反馈驱动的优化问题。该系统采用单一自主智能体,协调执行遗传提示突变、分层语料库探索与非对称行为评分的完整生命周期。通过将模型响应作为适应度信号,该框架能迭代式地复合有效攻击策略,同时确保“良性使用正确性”,避免因全面拒绝而产生的退化安全性。我们在Gemini 2.5 Flash上的实验表明:进化式突变能系统性放大单次测试方法遗漏的漏洞,受控消融实验揭示探索机制与定向突变的协同作用可发现高严重性失效模式。研究证明这种自适应方法能为智能体在持续演变的威胁环境中的鲁棒性提供更真实、可扩展的评估。NAAMSE代码已开源,详见https://github.com/HASHIRU-AI/NAAMSE。