Recent advances in large language models (LLMs) enable agentic systems trained with reinforcement learning (RL) over multi-turn interaction trajectories, but practical deployment is bottlenecked by rapidly growing textual histories that inflate token budgets and memory usage. We introduce AgentOCR, a framework that exploits the superior information density of visual tokens by representing the accumulated observation-action history as a compact rendered image. To make multi-turn rollouts scalable, AgentOCR employs segment optical caching: the history is decomposed into hashable segments backed by a visual cache, eliminating redundant re-rendering. Beyond fixed rendering, AgentOCR introduces agentic self-compression, in which the agent actively emits a compression rate and is trained with a compression-aware reward to adaptively balance task success against token efficiency. We conduct extensive experiments on two challenging agentic benchmarks, ALFWorld and search-based QA. Results demonstrate that AgentOCR preserves over 95\% of text-based agent performance while reducing token consumption by more than 50\%, yielding consistent token and memory savings. Further analysis validates a 20x rendering speedup from segment optical caching and confirms that self-compression learns an effective strategic balance between task success and token cost.
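Taken at face value, segment optical caching amounts to a hash-keyed image cache over history segments. The following minimal Python sketch illustrates that idea only; the class, its methods, and the naive PIL text renderer are our own stand-ins under stated assumptions, not AgentOCR's implementation.

\begin{verbatim}
import hashlib
from PIL import Image, ImageDraw, ImageFont

class SegmentOpticalCache:
    """Render-once cache for history segments (illustrative sketch)."""

    def __init__(self, width=768, line_height=16):
        self.width = width
        self.line_height = line_height
        self.font = ImageFont.load_default()
        self._cache = {}  # content hash -> rendered segment image

    def _render_segment(self, text):
        # Naive text-to-image rendering; a stand-in for the real renderer.
        lines = text.splitlines() or [""]
        img = Image.new("RGB", (self.width, self.line_height * len(lines)), "white")
        draw = ImageDraw.Draw(img)
        for i, line in enumerate(lines):
            draw.text((4, i * self.line_height), line, font=self.font, fill="black")
        return img

    def get(self, segment_text):
        # Hash the segment so identical history segments hit the cache.
        key = hashlib.sha256(segment_text.encode("utf-8")).hexdigest()
        if key not in self._cache:
            self._cache[key] = self._render_segment(segment_text)
        return self._cache[key]

    def render_history(self, segments):
        # Stitch per-segment images vertically into one history image;
        # only segments not seen before are actually rendered.
        imgs = [self.get(s) for s in segments]
        canvas = Image.new("RGB", (self.width, sum(im.height for im in imgs)), "white")
        y = 0
        for im in imgs:
            canvas.paste(im, (0, y))
            y += im.height
        return canvas
\end{verbatim}

In a rollout, each new turn appends one segment, so only that segment misses the cache while the rest of the history image is reassembled from cached renders; this is the redundant re-rendering that the reported 20x speedup points to.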
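The abstract leaves the compression-aware reward unspecified; as a hedged illustration only (all symbols below are our own, hypothetical, and not from the paper), one natural shaping penalizes the visual token footprint relative to the textual baseline:
\[
R(\tau) \;=\; R_{\mathrm{task}}(\tau) \;-\; \lambda \cdot \frac{T_{c}(\tau)}{T_{\mathrm{text}}(\tau)},
\]
where $T_{c}(\tau)$ denotes the token count of the trajectory's history rendered at the emitted compression rate $c$, $T_{\mathrm{text}}(\tau)$ the token count of the equivalent textual history, and $\lambda$ a trade-off coefficient. Under such a shaping, a higher compression rate lowers $T_{c}$ but risks discarding task-relevant detail, so the policy must learn the balance the abstract describes.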