Data quality is crucial for training Large Language Models (LLMs). Traditional heuristic filters often miss low-quality text or mistakenly remove valuable content. In this paper, we introduce an LLM-based line-level filtering method to enhance training data quality. We use GPT-4o mini to label a 20,000-document sample from FineWeb at the line level, allowing the model to generate descriptive labels for low-quality lines. These labels are grouped into nine main categories, and we train a DeBERTa-v3 classifier to scale the filtering to a 10B-token subset of FineWeb. To test the impact of our filtering, we train GPT-2 models on both the original and the filtered datasets. The results show that models trained on the filtered data achieve higher accuracy on the HellaSwag benchmark and reach their performance targets faster, even with up to 25\% less data. This demonstrates that LLM-based line-level filtering can significantly improve data quality and training efficiency for LLMs. We release our quality-annotated dataset, FinerWeb-10BT, and the codebase to support further work in this area.