LLM-based agentic coding assistants lack persistent memory: they lose coherence across sessions, forget project conventions, and repeat known mistakes. Recent studies characterize how developers configure agents through manifest files, but scaling such configurations to large, multi-agent projects remains an open challenge. This paper presents a three-component codified context infrastructure developed during construction of a 108,000-line C# distributed system: (1) a hot-memory constitution encoding conventions, retrieval hooks, and orchestration protocols; (2) 19 specialized domain-expert agents; and (3) a cold-memory knowledge base of 34 on-demand specification documents. Quantitative metrics on infrastructure growth and interaction patterns across 283 development sessions are reported alongside four observational case studies illustrating how codified context propagates across sessions to prevent failures and maintain consistency. The framework is published as an open-source companion repository.