True self-evolution requires agents to act as lifelong learners that internalize novel experiences to solve future problems. However, rigorously measuring this foundational capability is hindered by two obstacles: the entanglement of prior knowledge, where ``new'' knowledge may already appear in pre-training data, and the entanglement of reasoning complexity, where failures may stem from problem difficulty rather than an inability to recall learned knowledge. We introduce SE-Bench, a diagnostic environment that obfuscates the NumPy library and its API documentation into a pseudo-novel package with randomized identifiers. Agents are trained to internalize this package and evaluated on simple coding tasks without access to the documentation, yielding a clean setting where tasks are trivial given the new API documentation but impossible for base models without it. Our investigation reveals three insights: (1) the Open-Book Paradox, where training with reference documentation inhibits retention, requiring ``Closed-Book Training'' to force knowledge compression into the weights; (2) the RL Gap, where standard RL fails to fully internalize new knowledge, owing to PPO clipping and negative gradients; and (3) the viability of Self-Play for internalization, showing that models can learn from self-generated, noisy tasks when coupled with SFT, but not with RL. Overall, SE-Bench establishes a rigorous diagnostic platform for self-evolution with knowledge internalization. Our code and dataset can be found at https://github.com/thunlp/SE-Bench.
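The identifier-randomization idea behind the pseudo-novel package can be illustrated with a minimal sketch. Here, a small helper maps a handful of familiar NumPy symbol names to random, meaning-free aliases; the function name `obfuscate_names` and the use of a fixed seed are illustrative assumptions, not the actual SE-Bench pipeline.

```python
import random
import string

def obfuscate_names(names, seed=0):
    """Map each original identifier to a random, meaning-free alias.

    A minimal sketch of identifier randomization: every public symbol
    of the original library gets a fresh alias, so surface-level
    memorization of the real NumPy API no longer helps.
    """
    rng = random.Random(seed)  # fixed seed so the mapping is reproducible
    mapping = {}
    used = set()
    for name in names:
        alias = "".join(rng.choices(string.ascii_lowercase, k=8))
        while alias in used:  # avoid accidental collisions
            alias = "".join(rng.choices(string.ascii_lowercase, k=8))
        used.add(alias)
        mapping[name] = alias
    return mapping

# Example: rename a few NumPy API symbols into a pseudo-novel package.
api = ["array", "zeros", "linspace", "dot"]
aliases = obfuscate_names(api)
```

The same mapping would then be applied consistently to the library source and its documentation, so the agent sees a coherent but unfamiliar API.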