Self-evolving large language model (LLM) agents continually improve by accumulating and reusing past experience, yet it remains unclear whether they faithfully rely on that experience to guide their behavior. We present the first systematic investigation of experience faithfulness in self-evolving LLM agents: the causal dependence of an agent's decisions on the experience it is given. Using controlled causal interventions on both raw and condensed forms of experience, we evaluate four representative frameworks across ten LLM backbones and nine environments. Our analysis uncovers a striking asymmetry: while agents consistently depend on raw experience, they often disregard or misinterpret condensed experience, even when it is the only experience provided. This gap persists across single- and multi-agent configurations and across backbone scales. We trace its underlying causes to three factors: the semantic limitations of condensed content, internal processing biases that suppress experience, and task regimes where pretrained priors already suffice. These findings challenge prevailing assumptions about self-evolving methods and underscore the need for more faithful and reliable approaches to experience integration.
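To make the notion of a "controlled causal intervention on experience" concrete, the sketch below shows one minimal way such a faithfulness test could be operationalized: query the same agent with its real experience and with a perturbed control, and measure how often its decision changes. This is an illustrative assumption, not the paper's actual protocol; all names (`query_agent`, `faithfulness_score`, the shuffle-based control) are hypothetical placeholders.

```python
# Minimal sketch of an experience-faithfulness probe, assuming a black-box
# agent interface. query_agent(task, experience) -> decision is a
# hypothetical stand-in for whatever agent framework is under test.

import random
from typing import Callable, List

def faithfulness_score(
    query_agent: Callable[[str, str], str],
    tasks: List[str],
    experiences: List[str],
) -> float:
    """Fraction of tasks where intervening on the experience flips the decision.

    A score near 0 suggests the agent ignores the experience it is given;
    a higher score indicates causal dependence on that experience.
    """
    flips = 0
    for task, exp in zip(tasks, experiences):
        decision_with = query_agent(task, exp)
        # Intervention: replace the real experience with a shuffled control
        # that preserves surface tokens but destroys its semantic content.
        tokens = exp.split()
        control = " ".join(random.sample(tokens, len(tokens)))
        decision_without = query_agent(task, control)
        flips += decision_with != decision_without
    return flips / len(tasks)
```

Under this framing, the asymmetry reported in the abstract would appear as a high score when `experiences` holds raw trajectories and a markedly lower score when it holds condensed summaries.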