Processing structured tabular data, particularly large and lengthy tables, constitutes a fundamental yet challenging task for large language models (LLMs). However, existing long-context benchmarks like Needle-in-a-Haystack primarily focus on unstructured text, neglecting the challenge of diverse structured tables. Meanwhile, previous tabular benchmarks mainly consider downstream tasks that require high-level reasoning abilities, and overlook models' underlying fine-grained perception of individual table cells, which is crucial for practical and robust LLM-based table applications. To address this gap, we introduce \textsc{NeedleInATable} (NIAT), a new long-context tabular benchmark that treats each table cell as a ``needle'' and requires models to extract the target cell based on cell locations or lookup questions. Our comprehensive evaluation of various LLMs and multimodal LLMs reveals a substantial performance gap between popular downstream tabular tasks and the simpler NIAT task, suggesting that models may rely on dataset-specific correlations or shortcuts to obtain better benchmark results while lacking truly robust long-context understanding of structured tables. Furthermore, we demonstrate that using synthesized NIAT training data can effectively improve performance on both the NIAT task and downstream tabular tasks, which validates the importance of the NIAT capability for LLMs' genuine table understanding ability.
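To make the task setup concrete, the following is a minimal illustrative sketch (not the authors' released code) of how a NIAT-style probe could be constructed: a table is serialized into a textual format, and the target ``needle'' cell is identified either by its location or by a lookup question; the model's answer is then scored by exact match on that cell. All function names here are hypothetical.

```python
# Illustrative NIAT-style probe construction (hypothetical helper names).

def serialize_table(header, rows):
    """Render a table as a markdown string, one common LLM input format."""
    lines = ["| " + " | ".join(header) + " |",
             "| " + " | ".join("---" for _ in header) + " |"]
    lines += ["| " + " | ".join(str(c) for c in row) + " |" for row in rows]
    return "\n".join(lines)

def make_location_probe(header, rows, r, c):
    """Cell-location variant: ask for the cell at row r, column c (0-indexed)."""
    question = f"What is the value in row {r + 1}, column '{header[c]}'?"
    return question, str(rows[r][c])

def make_lookup_probe(header, rows, key_col, r, target_col):
    """Lookup-question variant: locate a row by a key value, then ask for a column."""
    question = (f"For the row where '{header[key_col]}' is '{rows[r][key_col]}', "
                f"what is '{header[target_col]}'?")
    return question, str(rows[r][target_col])

def exact_match(prediction, answer):
    """Score a model's answer by normalized exact match on the needle cell."""
    return prediction.strip().lower() == answer.strip().lower()
```

In this sketch, sweeping `(r, c)` over all cells of long tables would probe fine-grained cell perception across positions, analogous to varying needle depth in the original Needle-in-a-Haystack setup.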