Researchers in empirical machine learning have recently raised concerns about so-called Model Collapse. They imagined a discard workflow, in which an initial generative model is trained on real data, the real data are then discarded, and the model generates synthetic data on which a new model is trained. They concluded that models degenerate as generations of model fitting proceed. Other researchers, however, considered an augment workflow, in which the original real data continue to be used in each generation of training, augmented by synthetic data from the models fit in all earlier generations. Empirical results on canonical datasets and learning procedures confirmed that model collapse occurs under the discard workflow and is avoided under the augment workflow. Under the augment workflow, theoretical evidence also confirmed avoidance in particular instances; specifically, Gerstgrasser et al. (2024) found that for classical linear regression, the test risk at any later generation is bounded by a moderate multiple, namely π²/6, of the test risk of training on the original real data alone. Some commentators questioned the generality of theoretical conclusions based on the generative model assumed by Gerstgrasser et al. (2024): could similar conclusions be reached for other task/model pairings? In this work, we demonstrate the universality of the π²/6 augment risk bound across a large family of canonical statistical models, offering key insights into exactly why collapse happens under the discard workflow and is avoided under the augment workflow. In the process, we provide a framework able to accommodate a wide variety of workflows (beyond discard and augment), enabling an experimenter to judge the comparative merits of multiple different workflows by simulating a simple Gaussian process.
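As a rough illustration (not the paper's own code), the two workflows can be simulated for one of the simplest canonical models, a one-dimensional Gaussian location/scale family: fit a mean and standard deviation, sample synthetic data from the fit, and refit. The sample size n, generation count, and trial count below are arbitrary choices for the sketch. Intuitively, under discard the estimator performs a random walk whose variance accumulates generation after generation, while under augment each generation's synthetic batch is diluted in an ever-growing pool, consistent with a bounded total (note that π²/6 is the Basel sum Σ_{k≥1} 1/k²).

```python
import numpy as np

def simulate(workflow, n=100, generations=10, trials=2000, seed=0):
    """Monte Carlo test risk (squared error of the fitted mean; true mean 0)
    at each generation of a 1-D Gaussian location/scale model.

    'discard': each generation refits on fresh synthetic data only.
    'augment': each generation pools the original real data with the
               synthetic data generated in all earlier generations.
    """
    rng = np.random.default_rng(seed)
    risks = np.zeros(generations)
    for _ in range(trials):
        real = rng.normal(0.0, 1.0, size=n)   # original real data
        pool = real
        mu, sigma = real.mean(), real.std()   # generation-0 fit
        for g in range(generations):
            risks[g] += mu ** 2               # test risk: (mu - 0)^2
            synth = rng.normal(mu, sigma, size=n)
            if workflow == "discard":
                mu, sigma = synth.mean(), synth.std()
            else:                             # augment
                pool = np.concatenate([pool, synth])
                mu, sigma = pool.mean(), pool.std()
    return risks / trials

if __name__ == "__main__":
    print("discard risk by generation:", np.round(simulate("discard"), 4))
    print("augment risk by generation:", np.round(simulate("augment"), 4))
```

In this sketch the discard risk grows roughly linearly with the generation index, while the augment risk stays within a small constant multiple of the generation-0 risk, mirroring the collapse/avoidance dichotomy described above.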