Released Large Language Models (LLMs) are often paired with a claimed knowledge cutoff date, i.e., the date at which the training data was gathered. Such information is crucial for applications where the LLM must provide up-to-date information. However, this statement only scratches the surface: do all resources in the training data share the same knowledge cutoff date? Does the model's demonstrated knowledge for these subsets closely align with their cutoff dates? In this work, we define the notion of an effective cutoff, which is distinct from the cutoff reported by the LLM designer and applies separately to sub-resources and topics. We propose a simple approach for estimating resource-level effective cutoffs by probing an LLM across versions of the data. Using this analysis, we find that effective cutoffs often differ from reported cutoffs. To understand the root cause of this observation, we conduct a direct, large-scale analysis of open pre-training datasets. Our analysis reveals two reasons for these inconsistencies: (1) temporal biases in CommonCrawl data due to non-trivial amounts of old data in new dumps, and (2) complications in LLM deduplication schemes involving semantic duplicates and lexical near-duplicates. Overall, our results show that knowledge cutoffs are not as simple as they have seemed, and that care must be taken both by LLM dataset curators and by practitioners who seek to use information from these models.
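The version-probing idea can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: it assumes access to dated snapshots of a resource (e.g., monthly Wikipedia dumps) and a hypothetical `perplexity_fn` that scores how well the model fits a text. The snapshot whose version the model fits best is taken as that resource's effective cutoff.

```python
from datetime import date

def estimate_effective_cutoff(versions, perplexity_fn):
    """Estimate a resource-level effective cutoff.

    versions: list of (date, text) pairs, e.g. successive snapshots
    of the same document.
    perplexity_fn: returns the model's perplexity on a text; the
    snapshot with the lowest perplexity is taken as the version the
    model's knowledge aligns with.
    """
    best_date, best_ppl = None, float("inf")
    for version_date, text in versions:
        ppl = perplexity_fn(text)
        if ppl < best_ppl:
            best_date, best_ppl = version_date, ppl
    return best_date

# Toy stand-in scorer (an assumption for illustration): pretends the
# model memorized the 2022-03 snapshot, so that version scores lowest.
def mock_perplexity(text):
    return 5.0 if "2022-03" in text else 20.0

versions = [
    (date(2021, 6, 1), "snapshot 2021-06 ..."),
    (date(2022, 3, 1), "snapshot 2022-03 ..."),
    (date(2023, 1, 1), "snapshot 2023-01 ..."),
]
effective = estimate_effective_cutoff(versions, mock_perplexity)
```

With a real model, `perplexity_fn` would be replaced by an actual likelihood computation, and the gap between `effective` and the designer-reported cutoff is what this work measures.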
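To see why lexical near-duplicates complicate deduplication, consider two revisions of the same sentence that differ only in a dated fact. Exact-match deduplication treats them as distinct documents, yet they overlap heavily; a simple shingle-based Jaccard similarity (a common near-duplicate signal, shown here as an illustrative sketch rather than any particular pipeline's dedup scheme) makes the overlap visible.

```python
def shingles(text, n=3):
    """Set of word n-grams ("shingles") for a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity between the shingle sets of two texts."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

old = "The population of the city was 100,000 as of the 2010 census."
new = "The population of the city was 120,000 as of the 2020 census."

# Exact-match dedup sees two different strings, but the shingle
# overlap is substantial: these are lexical near-duplicates that
# carry different (dated) versions of the same fact.
sim = jaccard(old, new)
```

Whether such a pair is kept, dropped, or collapsed to one version determines which date's fact survives in the training data, which is one mechanism by which effective cutoffs drift from reported ones.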