Synthesized data from generative models is increasingly considered an alternative to human-annotated data for fine-tuning Large Language Models. This raises concerns about model collapse: a drop in performance of models fine-tuned on generated data. Since it is easier for both humans and machines to distinguish good examples from bad ones than to generate high-quality samples, we investigate the use of feedback on synthesized data to prevent model collapse. We derive theoretical conditions under which a Gaussian mixture classification model achieves asymptotically optimal performance when trained on feedback-augmented synthesized data, and provide supporting simulations for finite regimes. We illustrate our theoretical predictions on two practical problems: computing matrix eigenvalues with transformers and news summarization with large language models, both of which undergo model collapse when trained on model-generated data. We show that training on feedback-augmented synthesized data, either by pruning incorrect predictions or by selecting the best of several guesses, can prevent model collapse, validating popular approaches such as RLHF.
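The pruning idea can be illustrated with a minimal self-training sketch on a Gaussian mixture. This is not the paper's actual experimental setup: the dimension, sample sizes, plug-in mean estimator, and the oracle verifier used for feedback are all illustrative assumptions. At each generation the model labels fresh inputs, a verifier discards the incorrectly labeled samples, and the model is refit on what remains.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Gaussian mixture: label y in {-1, +1}, input x ~ N(y * mu, I)
d = 10
mu = np.ones(d) / np.sqrt(d)  # unit-norm class mean (assumed)

def sample(n, mean):
    """Draw n labeled points from the mixture with the given class mean."""
    y = rng.choice([-1, 1], size=n)
    x = y[:, None] * mean + rng.standard_normal((n, d))
    return x, y

def fit(x, y):
    """Plug-in estimate of the class mean; classifier is sign(x @ mu_hat)."""
    return (y[:, None] * x).mean(axis=0)

def accuracy(mu_hat, n_test=20000):
    x, y = sample(n_test, mu)
    return np.mean(np.sign(x @ mu_hat) == y)

# Generation 0: fit on real annotated data
x, y = sample(200, mu)
mu_hat = fit(x, y)

# Later generations: self-label fresh inputs, then prune with feedback
for gen in range(5):
    x, y_true = sample(200, mu)
    y_model = np.sign(x @ mu_hat)
    # Feedback step (assumed oracle verifier): drop samples whose
    # model-generated label is wrong before retraining
    keep = y_model == y_true
    mu_hat = fit(x[keep], y_model[keep])

print(f"accuracy after pruned self-training: {accuracy(mu_hat):.3f}")
```

Replacing the `keep` mask with `keep = np.ones(len(x), dtype=bool)` (no pruning) lets the model's own label errors accumulate across generations, which is the collapse regime the abstract describes; with pruning, accuracy stays near the Bayes optimum for this mixture.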