The need for large text corpora has increased with the advent of pretrained language models and, in particular, the discovery of scaling laws for these models. Most available corpora have sufficient data only for languages with large dominant communities. However, no corpus is available that (i) covers a wide range of minority languages, (ii) is generated by an open-source reproducible pipeline, and (iii) has been rigorously cleaned of noise, making it trustworthy to use. We present GlotCC, a clean, document-level, 2TB general-domain corpus derived from CommonCrawl, covering more than 1000 languages. We make GlotCC and the system used to generate it, including the pipeline, language identification model, and filters, available to the research community. Corpus v. 1.0: https://huggingface.co/datasets/cis-lmu/GlotCC-v1; Pipeline v. 3.0: https://github.com/cisnlp/GlotCC.