Large text corpora are the backbone of language models. However, we have a limited understanding of the content of these corpora, including general statistics, quality, social factors, and inclusion of evaluation data (contamination). In this work, we propose What's In My Big Data? (WIMBD), a platform and a set of sixteen analyses that allow us to reveal and compare the contents of large text corpora. WIMBD builds on two basic capabilities -- count and search -- at scale, which allows us to analyze more than 35 terabytes on a standard compute node. We apply WIMBD to ten different corpora used to train popular language models, including C4, The Pile, and RedPajama. Our analysis uncovers several surprising and previously undocumented findings about these corpora, including the high prevalence of duplicate, synthetic, and low-quality content, personally identifiable information, toxic language, and benchmark contamination. For instance, we find that about 50% of the documents in RedPajama and LAION-2B-en are duplicates. In addition, several datasets used for benchmarking models trained on such corpora are contaminated with respect to important benchmarks, including the Winograd Schema Challenge and parts of GLUE and SuperGLUE. We open-source WIMBD's code and artifacts to provide a standard set of evaluations for new text-based corpora and to encourage more analyses and transparency around them.
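To make the duplicate-rate figures above concrete, here is a minimal sketch of the kind of count-based analysis WIMBD's count capability enables. The exact-match hashing scheme and the definition of "duplicate" used here are illustrative assumptions for this sketch, not WIMBD's actual implementation, which operates at multi-terabyte scale.

```python
import hashlib
from collections import Counter


def duplicate_fraction(documents):
    """Return the fraction of documents whose exact text occurs more than once.

    Illustrative only: hashes each document's full text and counts collisions;
    a real corpus-scale pipeline would stream and shard this computation.
    """
    if not documents:
        return 0.0
    counts = Counter(
        hashlib.sha256(doc.encode("utf-8")).hexdigest() for doc in documents
    )
    # Every copy of a repeated document counts as a duplicate occurrence.
    dup_docs = sum(c for c in counts.values() if c > 1)
    return dup_docs / len(documents)


# Toy corpus: "a" appears 3 times and "b" twice, so 5 of 6 docs are duplicated.
docs = ["a", "b", "a", "c", "a", "b"]
print(duplicate_fraction(docs))  # 5/6 ≈ 0.833
```

A finding such as "about 50% of the documents in RedPajama are duplicates" corresponds to this fraction computed over an entire pretraining corpus.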