We study enabling large language models (LLMs) to process arbitrarily long prompts through the lens of inference-time scaling. We propose Recursive Language Models (RLMs), a general inference paradigm that treats a long prompt as part of an external environment and allows the LLM to programmatically examine, decompose, and recursively call itself over snippets of the prompt. We find that RLMs can successfully process inputs up to two orders of magnitude beyond model context windows and, even on shorter prompts, dramatically outperform vanilla frontier LLMs and common long-context scaffolds across four diverse long-context tasks at comparable cost. At a small scale, we post-train the first natively recursive language model. Our model, RLM-Qwen3-8B, outperforms the underlying Qwen3-8B model by $28.3\%$ on average and even approaches the quality of vanilla GPT-5 on three long-context tasks. Code is available at https://github.com/alexzhang13/rlm.
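The recursive decomposition described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: `llm_call` is a hypothetical stand-in for any LLM API (stubbed here so the sketch runs), and the split-in-half strategy is one simple choice among the programmatic decompositions an RLM could issue.

```python
# Minimal sketch of the recursive-call idea: prompts that exceed the
# context window are decomposed into snippets, each snippet is handled
# by a recursive call, and the partial answers are combined.
# NOTE: `llm_call` and `CONTEXT_WINDOW` are hypothetical placeholders,
# not names from the RLM codebase.

CONTEXT_WINDOW = 1000  # toy limit: max characters the model "sees" at once


def llm_call(prompt: str) -> str:
    # Stub standing in for a real LLM call; here it just "summarizes"
    # a prompt by keeping its first 100 characters.
    return prompt[:100]


def rlm(prompt: str) -> str:
    # Base case: the prompt fits in the context window, answer directly.
    if len(prompt) <= CONTEXT_WINDOW:
        return llm_call(prompt)
    # Recursive case: split the oversized prompt into snippets,
    # recursively process each, then combine the partial answers
    # with one more call over the (now short) intermediate results.
    mid = len(prompt) // 2
    left = rlm(prompt[:mid])
    right = rlm(prompt[mid:])
    return llm_call(left + "\n" + right)
```

Because every recursive call sees only a bounded amount of text, the scheme can in principle process inputs far larger than the context window, at the cost of extra calls; a real RLM would let the model itself decide how to inspect and split the prompt rather than always halving it.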