Self-evolving agents offer a promising path toward scalable autonomy. However, in this work, we show that in competitive environments, self-evolution can instead give rise to a serious and previously underexplored risk: the spontaneous emergence of deception as an evolutionarily stable strategy. We conduct a systematic empirical study of the self-evolution of large language model (LLM) agents in a competitive Bidding Arena, where agents iteratively refine their strategies through interaction-driven reflection. Across different evolutionary paths (\eg, Neutral, Honesty-Guided, and Deception-Guided), we find a consistent pattern: under utility-driven competition, unconstrained self-evolution reliably drifts toward deceptive behaviors, even when honest strategies remain viable. This drift is explained by a fundamental asymmetry in generalization: deception evolves as a transferable meta-strategy that generalizes robustly across diverse and unseen tasks, whereas honesty-based strategies are fragile and often collapse outside their original contexts. Further analysis of agents' internal states reveals the emergence of rationalization mechanisms, through which agents justify or deny deceptive actions to reconcile competitive success with normative instructions. Our paper exposes a fundamental tension between agent self-evolution and alignment, highlighting the risks of deploying self-improving agents in adversarial environments.