As LLMs move from text completion toward autonomous agents, they remain constrained by the standard chat interface, which lacks private working memory. This raises a fundamental question: can agents reliably perform interactive tasks that depend on hidden state? We define Private State Interactive Tasks (PSITs), which require agents to generate and maintain hidden information while producing consistent public responses. We show theoretically that any agent restricted to the public conversation history cannot simultaneously preserve secrecy and consistency in PSITs, yielding an impossibility theorem. To empirically validate this limitation, we introduce a self-consistency testing protocol that evaluates whether agents can maintain a hidden secret across forked dialogue branches. Standard chat-based LLMs and retrieval-based memory baselines fail this test regardless of scale, demonstrating that semantic retrieval does not enable true state maintenance. To address this, we propose a novel architecture incorporating an explicit private working memory; we demonstrate that this mechanism restores consistency, establishing private state as a necessary component for interactive language agents.
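The forked-dialogue idea behind the self-consistency protocol is easy to picture in code. The following is a minimal sketch, not the paper's actual harness: `PublicHistoryAgent`, `fork`, `self_consistency_test`, and the probe wording are all hypothetical stand-ins, and the scripted replies merely mimic an LLM that re-samples any value it never wrote down.

```python
import copy
import random

class PublicHistoryAgent:
    """Toy stand-in for a chat agent whose only state is the public transcript.
    Because the secret is never recorded anywhere, each branch 'decides' it
    afresh at reveal time, mimicking an LLM sampling a new value per branch."""
    def __init__(self):
        self.transcript = []  # public conversation history (the agent's only memory)

    def respond(self, user_msg: str) -> str:
        self.transcript.append(("user", user_msg))
        if "reveal the secret" in user_msg.lower():
            reply = str(random.randint(100, 999))  # no stored secret to consult
        else:
            reply = "Okay."
        self.transcript.append(("agent", reply))
        return reply

def fork(agent):
    """Duplicate the agent at the current dialogue point into an independent branch."""
    return copy.deepcopy(agent)

def self_consistency_test(make_agent, probes):
    """One possible operationalization of the protocol: commit the agent to a
    hidden secret, fork the dialogue, interact differently in each branch,
    and pass only if all branches ultimately reveal the same secret."""
    root = make_agent()
    root.respond("Think of a secret 3-digit number and keep it hidden.")
    reveals = []
    for probe in probes:
        branch = fork(root)        # identical public history in every branch
        branch.respond(probe)      # branch-specific interaction
        reveals.append(branch.respond("Please reveal the secret number now."))
    return len(set(reveals)) == 1  # consistency across branches

probes = ["Is the number greater than 500?", "Is the number even?"]
print(self_consistency_test(PublicHistoryAgent, probes))  # almost surely False
```

An agent restricted to the public history fails this check for exactly the reason the abstract states: the secret was never committed anywhere secrecy allows, so each fork draws it independently and the branches disagree.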
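The proposed remedy, as described, is a scratchpad outside the public transcript. A minimal sketch of that idea, again under assumed details (the names `PrivateStateAgent` and `private_memory` and the write/read discipline are illustrative, not the paper's architecture):

```python
import random

class PrivateStateAgent:
    """Sketch of an agent with explicit private working memory: the hidden
    value is committed once, never appears in the public transcript, and is
    read back on demand, so forked branches that inherit it stay consistent."""
    def __init__(self):
        self.transcript = []       # public conversation history
        self.private_memory = {}   # hidden working state, never shown to the user

    def respond(self, user_msg: str) -> str:
        self.transcript.append(("user", user_msg))
        msg = user_msg.lower()
        if "think of a secret" in msg:
            # Commit the hidden value to private memory, not to the transcript.
            self.private_memory["secret"] = random.randint(100, 999)
            reply = "I have chosen a secret number."
        elif "reveal the secret" in msg:
            # Read the committed value back; every fork sees the same one.
            reply = str(self.private_memory["secret"])
        else:
            reply = "Okay."
        self.transcript.append(("agent", reply))
        return reply

# Reusing fork/self_consistency_test from the previous sketch:
# self_consistency_test(PrivateStateAgent, probes) -> True, because each
# deep-copied branch inherits the same committed secret.
```

In this toy setting, consistency is restored precisely because forking copies the private memory along with the public history, which is the mechanism the abstract argues is necessary.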