AI memory systems are evolving toward unified context layers that enable efficient cross-agent collaboration and multi-tool workflows, allowing personal data to accumulate and user preferences to be learned over time. However, centralization creates a trust crisis: users must entrust their sensitive digital memories to cloud providers. We identify a core tension between personalization and data sovereignty: centralized memory systems enable efficient cross-agent collaboration but expose users' sensitive data to cloud-provider risk, while private deployments provide security but limit collaboration. To resolve this tension, we aim for local-equivalent security combined with superior maintenance efficiency and collaborative capability. We propose a five-layer architecture that abstracts the common functional components of AI memory systems: Storage, Extraction, Learning, Retrieval, and Governance. By applying trusted execution environment (TEE) protection at each layer, we establish a trustworthy framework. On this basis, we design MemTrust, a hardware-backed zero-trust architecture that provides cryptographic guarantees across all layers. Our contributions include the five-layer abstraction, a "Context from MemTrust" protocol for cross-application sharing, side-channel-hardened retrieval with obfuscated access patterns, and a comprehensive security analysis. The architecture enables third-party developers to port existing systems at acceptable development cost, achieving system-wide trustworthiness. We believe AI memory plays a crucial role in improving the efficiency and collaboration of agents and AI tools, and that it will become foundational infrastructure for AI agents. MemTrust serves as a universal trusted framework for AI memory systems, aiming to become the infrastructure of memory infrastructure.
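The five-layer abstraction can be pictured as a pipeline in which Extraction feeds Storage and Learning, Retrieval reads from Storage, and Governance gates every external caller. The sketch below is a minimal illustration under our own assumptions: all class and method names are hypothetical rather than MemTrust's actual API, and TEE attestation, encryption, and access-pattern obfuscation are elided.

```python
from dataclasses import dataclass, field

@dataclass
class Storage:
    """Layer 1: persists memory items (encrypted at rest in the real design)."""
    items: list = field(default_factory=list)

    def put(self, item: str) -> None:
        self.items.append(item)

@dataclass
class Extraction:
    """Layer 2: distills raw interactions into memory items."""
    def extract(self, interaction: str) -> str:
        return interaction.strip()

@dataclass
class Learning:
    """Layer 3: updates user-preference state as new memories arrive."""
    prefs: dict = field(default_factory=dict)

    def update(self, item: str) -> None:
        self.prefs[item] = self.prefs.get(item, 0) + 1

@dataclass
class Retrieval:
    """Layer 4: returns memories relevant to a query
    (access patterns would be obfuscated in the real design)."""
    def search(self, store: Storage, query: str) -> list:
        return [m for m in store.items if query in m]

@dataclass
class Governance:
    """Layer 5: enforces per-application access policy."""
    allowed_apps: set = field(default_factory=set)

    def authorize(self, app: str) -> bool:
        return app in self.allowed_apps

class MemorySystem:
    """Composes the five layers; each would run under TEE protection."""
    def __init__(self, allowed_apps):
        self.storage = Storage()
        self.extraction = Extraction()
        self.learning = Learning()
        self.retrieval = Retrieval()
        self.governance = Governance(set(allowed_apps))

    def remember(self, interaction: str) -> None:
        item = self.extraction.extract(interaction)
        self.storage.put(item)
        self.learning.update(item)

    def recall(self, app: str, query: str) -> list:
        # Governance gates cross-application sharing before any retrieval,
        # mirroring the role of the "Context from MemTrust" protocol.
        if not self.governance.authorize(app):
            return []
        return self.retrieval.search(self.storage, query)
```

The key design point this sketch reflects is that Governance sits in front of Retrieval, so no application reads memory without an explicit policy check.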