We introduce Lifelong ICL, a problem setting that challenges long-context language models (LMs) to learn a sequence of language tasks through in-context learning (ICL). We further introduce Task Haystack, an evaluation suite dedicated to assessing and diagnosing how long-context LMs utilize contexts in Lifelong ICL. When given a task instruction and test inputs, long-context LMs are expected to leverage the relevant demonstrations in the Lifelong ICL prompt, avoid distraction and interference from other tasks, and achieve test accuracies that are not significantly worse than those of the Single-task ICL baseline. Task Haystack draws inspiration from the widely-adopted "needle-in-a-haystack" (NIAH) evaluation, but presents distinct new challenges. It requires models (1) to utilize the contexts at a deeper level, rather than resorting to simple copying and pasting; (2) to navigate through long streams of evolving topics and tasks, mirroring the complexities and dynamism of contexts in real-world scenarios. Additionally, Task Haystack inherits the controllability of NIAH, providing model developers with tools and visualizations to identify model vulnerabilities effectively. We benchmark 14 long-context LMs using Task Haystack, finding that frontier models like GPT-4o still struggle with the setting, failing in 15% of cases on average. Most open-weight models lag further behind by a large margin, with failure rates reaching up to 61%. In our controlled analysis, we identify factors such as distraction and recency bias as contributors to these failure cases. Further, performance declines when task instructions are paraphrased at test time or when ICL demonstrations are repeated excessively, raising concerns about the robustness, instruction understanding, and true context utilization of long-context LMs.
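To make the evaluation protocol concrete, the sketch below illustrates the Lifelong ICL setup and the pass criterion described above. All names (`Task`, `build_lifelong_prompt`, `passes`) and the `model.generate` interface are hypothetical, and the fixed `margin` stands in for the significance test against the Single-task ICL baseline; this is a minimal illustration, not the official Task Haystack implementation.

```python
# Minimal sketch of the Lifelong ICL evaluation loop.
# Assumption: `model` exposes a generate(prompt: str) -> str method.
from dataclasses import dataclass

@dataclass
class Task:
    instruction: str              # natural-language task instruction
    demos: list[tuple[str, str]]  # (input, label) ICL demonstrations
    tests: list[tuple[str, str]]  # held-out (input, label) test pairs

def build_lifelong_prompt(tasks: list[Task]) -> str:
    """Concatenate demonstrations from a sequence of tasks into one long prompt."""
    blocks = []
    for task in tasks:
        demo_lines = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in task.demos)
        blocks.append(f"{task.instruction}\n{demo_lines}")
    return "\n\n".join(blocks)

def accuracy(model, prefix: str, task: Task) -> float:
    """Test accuracy when the task instruction follows a (possibly long) prefix."""
    correct = 0
    for x, y in task.tests:
        prompt = f"{prefix}\n\n{task.instruction}\nInput: {x}\nOutput:"
        correct += int(model.generate(prompt).strip() == y)
    return correct / len(task.tests)

def passes(model, task_stream: list[Task], target: Task, margin: float = 0.05) -> bool:
    """Pass if Lifelong ICL accuracy is not significantly below Single-task ICL.

    'Significantly' is approximated here by a fixed margin; a statistical
    test over repeated runs would replace this in a full evaluation."""
    lifelong = accuracy(model, build_lifelong_prompt(task_stream), target)
    single = accuracy(model, build_lifelong_prompt([target]), target)
    return lifelong >= single - margin
```

A model is scored by how often `passes` holds across target tasks and stream permutations; the failure rates reported above correspond to the fraction of such checks that fail.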