This paper presents WanJuan-CC, a safe, high-quality, open-source English webtext dataset derived from Common Crawl. The study addresses the challenge of constructing large-scale pre-training datasets for language models, which require vast amounts of high-quality data. We designed a comprehensive pipeline for processing Common Crawl data, comprising extraction, heuristic rule filtering, fuzzy deduplication, content safety filtering, and data quality filtering. From approximately 68 billion original English documents, we obtained 2.22T tokens of safe data and selected 1.0T tokens of high-quality data for WanJuan-CC, of which 100B tokens have been open-sourced. The paper also provides data-quality statistics so that users can select data suited to their needs. To evaluate the quality and utility of the dataset, we trained 1B-parameter and 3B-parameter models on WanJuan-CC and on RefinedWeb, a comparable dataset. Results show that models trained on WanJuan-CC perform better on validation sets and downstream tasks.
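Of the pipeline stages listed above, fuzzy deduplication is the one most often implemented with MinHash over document shingles. The abstract does not specify WanJuan-CC's exact method, so the following is only an illustrative sketch of MinHash-based near-duplicate detection (character 5-gram shingles, 128 hash functions are assumed parameters, not values from the paper):

```python
import hashlib

def shingles(text, k=5):
    """Split normalized text into overlapping character k-grams."""
    text = " ".join(text.lower().split())
    return {text[i:i + k] for i in range(max(1, len(text) - k + 1))}

def minhash_signature(shingle_set, num_hashes=128):
    """For each of num_hashes seeded hash functions, keep the minimum
    hash value over all shingles; the resulting signature is a compact
    proxy for the full shingle set."""
    sig = []
    for seed in range(num_hashes):
        salt = seed.to_bytes(4, "little")  # per-function seed via blake2b salt
        sig.append(min(
            int.from_bytes(
                hashlib.blake2b(s.encode(), digest_size=8, salt=salt).digest(),
                "little",
            )
            for s in shingle_set
        ))
    return sig

def estimated_jaccard(sig_a, sig_b):
    """The fraction of matching signature positions is an unbiased
    estimate of the Jaccard similarity of the two shingle sets."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

In a production pipeline the signatures would additionally be banded into a locality-sensitive hashing index so that candidate duplicate pairs can be found without comparing every pair of documents; a similarity threshold on `estimated_jaccard` then decides which near-duplicates to drop.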