With context windows of millions of tokens, Long-Context Language Models (LCLMs) can encode entire document collections, offering a strong alternative to conventional retrieval-augmented generation (RAG). However, it remains unclear whether fine-tuning strategies can improve long-context performance and translate into greater robustness under KV-cache compression. In this work, we investigate which training strategies most effectively enhance LCLMs' ability to identify and use relevant information, and which improve their robustness under KV-cache compression. Our experiments show substantial in-domain improvements, with gains of up to +20 points over the base model. Out-of-domain generalization, however, remains task-dependent with large variance: LCLMs excel on finance questions (+9 points), while RAG performs better on multiple-choice questions (+6 points over the baseline models). Finally, we show that our fine-tuning approaches bring moderate improvements in robustness under KV-cache compression, with gains varying across tasks.