Large Language Models (LLMs) are increasingly evaluated on their ability to reason over structured data, yet such assessments often overlook a crucial confound: dataset contamination. In this work, we investigate whether LLMs exhibit prior knowledge of widely used tabular benchmarks such as Adult Income and Titanic. Through a series of controlled probing experiments, we show that contamination effects emerge exclusively for datasets containing strong semantic cues, for instance meaningful column names or interpretable value categories. In contrast, when such cues are removed or randomized, performance declines sharply to near-random levels. These findings suggest that LLMs' apparent competence on tabular reasoning tasks may partly reflect memorization of publicly available datasets rather than genuine generalization. We discuss the implications for evaluation protocols and propose strategies for disentangling semantic leakage from authentic reasoning ability in future LLM assessments.
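As an illustration of the kind of cue-removal probe described above, the following Python sketch anonymizes column names and remaps categorical values before a table is shown to a model; it is not the authors' implementation, and the helper name, dataset path, and column handling are illustrative assumptions.

```python
# Minimal sketch of a semantic-cue ablation: meaningful column names and
# category strings are replaced with uninformative tokens, so only the
# statistical structure of the table remains.
import numpy as np
import pandas as pd

def strip_semantic_cues(df: pd.DataFrame, seed: int = 0) -> pd.DataFrame:
    """Return a copy of df with anonymized column names and randomized category labels."""
    rng = np.random.default_rng(seed)
    out = df.copy()
    # Replace meaningful column names (e.g. "education", "marital-status")
    # with generic tokens.
    out.columns = [f"col_{i}" for i in range(out.shape[1])]
    for col in out.columns:
        if out[col].dtype == object:
            # Map each category string ("Married", "Bachelors", ...) to an
            # arbitrary code so the values carry no semantic signal.
            cats = out[col].astype("category").cat.categories
            codes = rng.permutation(len(cats))
            out[col] = out[col].map(dict(zip(cats, (f"v{c}" for c in codes))))
    return out

# Hypothetical usage on a local copy of the Adult Income table:
# adult = pd.read_csv("adult.csv")
# probe_table = strip_semantic_cues(adult)
```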