Model collapse on synthetic data refers to the gradual decline in performance caused by iteratively training models on self-generated data. With the proliferation of AI models, synthetic data will fundamentally reshape the web data ecosystem, and future GPT-$\{n\}$ models will inevitably be trained on a blend of synthetic and human-produced data. In this paper, we focus on two questions: what is the impact of synthetic data on language model training, and how can data be synthesized without causing model collapse? We first pre-train language models on different proportions of synthetic data, revealing a negative correlation between the proportion of synthetic data and model performance. We further conduct a statistical analysis of synthetic data, uncovering a distributional shift and an over-concentration of n-gram features. Motivated by these findings, we propose token-level editing of human-produced data to obtain semi-synthetic data. As a proof of concept, we theoretically demonstrate that token-level editing can prevent model collapse, as the test error is constrained by a finite upper bound. We conduct extensive experiments on pre-training from scratch, continual pre-training, and supervised fine-tuning. The results support our theoretical analysis, showing that token-level editing improves data quality and enhances model performance.
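To make the idea of token-level editing concrete, the sketch below illustrates one plausible instantiation under assumptions not stated in the abstract: a pretrained causal language model (here `gpt2` from Hugging Face Transformers) scores each token of a human-written text, and tokens on which the prior model is over-confident (probability above an illustrative threshold `p`) are resampled from the model's distribution, while all other tokens are kept verbatim. The function name `token_edit` and the threshold value are hypothetical, not the authors' exact implementation.

```python
# Minimal sketch of token-level editing to produce semi-synthetic data.
# Assumption: edit only tokens where the prior LM is over-confident,
# leaving the rest of the human-produced text unchanged.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def token_edit(text: str, p: float = 0.99) -> str:
    """Resample tokens whose probability under the prior LM exceeds `p`."""
    ids = tokenizer(text, return_tensors="pt").input_ids          # (1, T)
    with torch.no_grad():
        logits = model(ids).logits                                # (1, T, V)
    # logits at position t-1 predict the token at position t
    probs = torch.softmax(logits[:, :-1, :], dim=-1)
    edited = ids.clone()
    for t in range(1, ids.size(1)):
        dist = probs[0, t - 1]
        if dist[ids[0, t]] > p:                                   # over-confident token
            edited[0, t] = torch.multinomial(dist, 1).item()      # resample it
    return tokenizer.decode(edited[0], skip_special_tokens=True)

print(token_edit("The quick brown fox jumps over the lazy dog."))
```

Because only a small fraction of tokens is ever replaced, the output stays anchored to the human-produced source distribution rather than drifting toward purely model-generated text, which is the intuition behind the bounded test error claimed above.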