Long-context language models (LCLMs) have the potential to revolutionize our approach to tasks traditionally reliant on external tools like retrieval systems or databases. Leveraging LCLMs' ability to natively ingest and process entire corpora of information offers numerous advantages. It enhances user-friendliness by eliminating the need for specialized knowledge of tools, provides robust end-to-end modeling that minimizes cascading errors in complex pipelines, and allows for the application of sophisticated prompting techniques across the entire system. To assess this paradigm shift, we introduce LOFT, a benchmark of real-world tasks requiring context up to millions of tokens designed to evaluate LCLMs' performance on in-context retrieval and reasoning. Our findings reveal LCLMs' surprising ability to rival state-of-the-art retrieval and RAG systems, despite never having been explicitly trained for these tasks. However, LCLMs still face challenges in areas like compositional reasoning that are required in SQL-like tasks. Notably, prompting strategies significantly influence performance, emphasizing the need for continued research as context lengths grow. Overall, LOFT provides a rigorous testing ground for LCLMs, showcasing their potential to supplant existing paradigms and tackle novel tasks as model capabilities scale.