Conversation summarization loses nuanced details: when asked about coding preferences after 40 turns, summarization recalls "use type hints" but drops the critical constraint "everywhere" (19.0% exact match vs. 93.0% for our approach). We present CogCanvas, a training-free framework inspired by how teams use whiteboards to anchor shared memory. Rather than compressing conversation history, CogCanvas extracts verbatim-grounded artifacts (decisions, facts, reminders) and retrieves them via a temporal-aware graph. On the LoCoMo benchmark (all 10 conversations from the ACL 2024 release), CogCanvas achieves the highest overall accuracy among training-free methods (32.4%), outperforming RAG (24.6%) by +7.8pp. Its largest advantage is on temporal reasoning (+20.6pp; 32.7% vs. 12.1% for RAG), with a modest edge on multi-hop questions (+1.1pp; 41.7% vs. 40.6%). CogCanvas also leads on single-hop retrieval (26.6% vs. 24.6% for RAG). Ablation studies reveal that BGE reranking contributes +7.7pp, making it the largest contributor to CogCanvas's performance. While heavily optimized approaches achieve higher absolute scores through dedicated training (EverMemOS: ~92%), our training-free approach provides practitioners with an immediately deployable alternative that significantly outperforms standard baselines. Code and data: https://github.com/tao-hpu/cog-canvas