Large Language Models have demonstrated profound utility in the medical domain. However, their application to autonomous navigation of Electronic Health Records~(EHRs) remains constrained by a reliance on curated inputs and simplified retrieval tasks. To bridge the gap between idealized experimental settings and realistic clinical environments, we present AgentEHR, a benchmark that challenges agents to execute complex decision-making tasks, such as diagnosis and treatment planning, which require long-range interactive reasoning directly within raw, high-noise databases. In tackling these tasks, we find that existing summarization methods inevitably suffer from critical information loss and fractured reasoning continuity. To address this, we propose RetroSum, a novel framework that unifies a retrospective summarization mechanism with an evolving experience strategy. By dynamically re-evaluating the interaction history, the retrospective mechanism prevents long-context information loss and preserves logical coherence. In addition, the evolving strategy bridges the domain gap by retrieving accumulated experience from a memory bank. Extensive empirical evaluations demonstrate that RetroSum achieves performance gains of up to 29.16% over competitive baselines while reducing total interaction errors by up to 92.3%.