Deep research is emerging as a representative long-horizon task for large language model (LLM) agents. However, long trajectories in deep research often exceed model context limits, compressing the token budgets available for both evidence collection and report writing and preventing effective test-time scaling. We introduce FS-Researcher, a file-system-based, dual-agent framework that scales deep research beyond the context window via a persistent workspace. Specifically, a Context Builder agent acts as a librarian that browses the internet, writes structured notes, and archives raw sources into a hierarchical knowledge base that can grow far beyond the context length. A Report Writer agent then composes the final report section by section, treating the knowledge base as the source of facts. In this framework, the file system serves as a durable external memory and a shared coordination medium across agents and sessions, enabling iterative refinement beyond the context window. Experiments on two open-ended benchmarks (DeepResearch Bench and DeepConsult) show that FS-Researcher achieves state-of-the-art report quality across different backbone models. Further analyses demonstrate a positive correlation between final report quality and the computation allocated to the Context Builder, validating effective test-time scaling under the file-system paradigm. The code and data are anonymously open-sourced at https://github.com/Ignoramus0817/FS-Researcher.