Large language models (LLMs) are commonly treated as stateless: once an interaction ends, no information is assumed to persist unless it is explicitly stored and re-supplied. We challenge this assumption by introducing implicit memory: the ability of a model to carry state across otherwise independent interactions by encoding information in its own outputs and later recovering it when those outputs are reintroduced as input. This mechanism requires no explicit memory module, yet it creates a persistent information channel across inference requests. As a concrete demonstration, we introduce a new class of temporal backdoors, which we call time bombs. Unlike conventional backdoors that activate on a single trigger input, time bombs activate only after a sequence of interactions satisfies hidden conditions accumulated via implicit memory. We show that such behavior can be induced today through straightforward prompting or fine-tuning. Beyond this case study, we analyze broader implications of implicit memory, including covert inter-agent communication, benchmark contamination, targeted manipulation, and training-data poisoning. Finally, we discuss detection challenges and outline directions for stress-testing and evaluation, with the goal of anticipating and controlling future developments. To promote future research, we release code and data at: https://github.com/microsoft/implicitMemory.