Production LLM agents with tool-using capabilities require security testing despite their safety training. We adapt Go-Explore to evaluate GPT-4o-mini across 28 experimental runs spanning six research questions. We find that random-seed variance dominates the effect of algorithmic parameters, yielding an 8x spread in outcomes; single-seed comparisons are therefore unreliable, while multi-seed averaging materially reduces variance in our setup. Reward shaping consistently harms performance, causing exploration collapse in 94% of runs or producing 18 false positives with zero verified attacks. In our environment, simple state signatures outperform complex ones. For comprehensive security testing, ensembles provide attack-type diversity, whereas single agents optimize coverage within a given attack type. Overall, these results suggest that seed variance and targeted domain knowledge can outweigh algorithmic sophistication when testing safety-trained models.
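To make the "simple state signatures" finding concrete, the sketch below shows one way a Go-Explore-style archive could key its cells on a coarse transcript signature. This is a minimal illustration under our own assumptions, not the paper's implementation: the names `simple_signature`, `maybe_add`, the `archive` dict, and the `"tool"` transcript key are all hypothetical.

```python
import hashlib
from typing import Dict, List

def simple_signature(transcript: List[dict]) -> str:
    """Hypothetical 'simple' signature: hash only the ordered tool names
    in the conversation, ignoring arguments and free-form model text."""
    tool_seq = "|".join(turn["tool"] for turn in transcript if "tool" in turn)
    return hashlib.sha256(tool_seq.encode()).hexdigest()[:16]

# Go-Explore-style archive: one representative trajectory per signature
# "cell"; exploration restarts from archived cells rather than from scratch.
archive: Dict[str, dict] = {}

def maybe_add(transcript: List[dict], score: float) -> None:
    """Keep the highest-scoring trajectory seen for each cell."""
    sig = simple_signature(transcript)
    best = archive.get(sig)
    if best is None or score > best["score"]:
        archive[sig] = {"transcript": transcript, "score": score}
```

Under this sketch, a "complex" signature that also hashed tool arguments and model text would fragment the archive into many rarely revisited cells, which is one plausible mechanism for the simple-signature advantage reported above.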