Multilingual data from the web is essential for LLM pretraining. Yet scraping it is expensive, and research groups repeatedly crawl the same content: for example, we found that over 40\% of tokens across major Arabic web corpora are duplicated between sources. In this work, we propose to turn this wasteful redundancy into a quality signal for building high-quality pretraining datasets. Our key insight is that cross-source agreement functions as a free, model-free quality filter: content retained by multiple independent pipelines is more likely to be high-quality text. Crucially, this signal requires no computation beyond standard deduplication, which is already performed at scale when pretraining language models. We therefore propose MixMinMatch, a method that combines multiple existing web corpora, performs cross-dataset MinHash deduplication, and identifies documents independently recovered by multiple sources. We apply MixMinMatch to Arabic, Turkish, and Hindi, producing corpora that match or exceed the quality of the best single-source baselines while providing up to 4$\times$ more unique tokens. On Arabic, our matched subset achieves a 4.5\% relative improvement over ArabicWeb24, while on Turkish, we improve over FineWeb-2 by 5.5\%. We release the datasets at: https://huggingface.co/collections/AdaMLLab/mixminmatch
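The cross-source agreement step described above can be sketched as follows. This is a minimal, stdlib-only illustration, not the released pipeline: it uses a toy MinHash (salted MD5 instead of true permutations, no LSH indexing), and the function names, shingle size, and threshold are illustrative assumptions; a production system would use a scalable MinHash/LSH implementation.

```python
import hashlib

NUM_PERM = 64  # number of hash functions in the MinHash signature (illustrative)


def shingles(text, k=3):
    """Break a document into overlapping k-word shingles."""
    toks = text.split()
    return {" ".join(toks[i:i + k]) for i in range(max(1, len(toks) - k + 1))}


def minhash_signature(text, num_perm=NUM_PERM):
    """Toy MinHash: for each 'permutation', take the minimum salted hash
    over the document's shingles. Identical documents get identical
    signatures; similar documents agree on many positions."""
    sig = []
    for seed in range(num_perm):
        sig.append(min(
            int.from_bytes(
                hashlib.md5(f"{seed}:{s}".encode()).digest()[:8], "big")
            for s in shingles(text)
        ))
    return sig


def estimated_jaccard(sig_a, sig_b):
    """Fraction of agreeing signature positions estimates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)


def cross_source_matched(corpora, threshold=0.8):
    """Keep documents that are near-duplicated across at least two sources.

    corpora: dict mapping source name -> list of document strings.
    Returns documents recovered by more than one independent pipeline --
    the 'matched' subset used as a quality signal.
    """
    sigs = {src: [(doc, minhash_signature(doc)) for doc in docs]
            for src, docs in corpora.items()}
    matched = []
    sources = list(sigs)
    for i, src_a in enumerate(sources):
        for doc, sig in sigs[src_a]:
            # A brute-force pairwise scan; real pipelines use LSH buckets.
            if any(estimated_jaccard(sig, other_sig) >= threshold
                   for src_b in sources[i + 1:]
                   for _, other_sig in sigs[src_b]):
                matched.append(doc)
    return matched
```

A document crawled by two independent corpora lands in the matched subset; a document seen only once does not. The same signatures also drive ordinary deduplication, which is why the agreement signal comes at no extra cost.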