The prevalence and low cost of LLMs have led to a surge of synthetic content. From review sites to court documents, ``natural'' content has been contaminated by data points that resemble natural data but are in fact LLM-generated. In this work we revisit fundamental learning theory questions in this, now ubiquitous, setting. We model the scenario as a sequence of learning tasks in which the input is a mix of natural and synthetic data, and the learning algorithms are oblivious to the origin of any individual example. We study the possibilities and limitations of empirical risk minimization (ERM) in this setting. For the problem of estimating the mean of an arbitrary $d$-dimensional distribution, we find that while ERM converges to the true mean, it is outperformed by an algorithm that assigns non-uniform weights to examples from different generations of data. In the PAC learning setting, the disparity is even starker. We find that ERM does not always converge to the true concept, echoing the model collapse literature. However, we show there are algorithms capable of learning the correct hypothesis for arbitrary VC classes and arbitrary amounts of contamination.
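To make the mean-estimation claim concrete, the following is a minimal toy simulation, not the paper's actual model: generation $0$ is natural data, and each later generation is sampled around the previous generation's empirical mean (a self-consuming loop). The inverse-variance-style weights `1/(1 + t/n)` are an illustrative choice for down-weighting later, noisier generations; all names and parameters here are hypothetical.

```python
import numpy as np

def squared_error(rng, d=5, n=10, T=50, w=None):
    """One run: generation 0 is natural, generation t is drawn around
    the empirical mean of generation t-1. Returns the squared error of
    a (possibly weighted) mean estimate against the true mean."""
    mu = np.zeros(d)          # true mean (toy target)
    center = mu               # generation 0 is sampled from the truth
    gen_means = []
    for _ in range(T):
        samples = rng.normal(center, 1.0, size=(n, d))
        gen_means.append(samples.mean(axis=0))
        center = gen_means[-1]  # next generation "trained" on this one
    gen_means = np.array(gen_means)
    if w is None:
        w = np.ones(T)        # uniform weights = plain ERM
    w = w / w.sum()
    est = (w[:, None] * gen_means).sum(axis=0)
    return float(np.sum((est - mu) ** 2))

rng = np.random.default_rng(0)
T, n, trials = 50, 10, 200
inv_var = 1.0 / (1.0 + np.arange(T) / n)   # down-weight later generations
erm_mse = np.mean([squared_error(rng, n=n, T=T) for _ in range(trials)])
wtd_mse = np.mean([squared_error(rng, n=n, T=T, w=inv_var) for _ in range(trials)])
print(f"uniform ERM MSE: {erm_mse:.3f}, weighted MSE: {wtd_mse:.3f}")
```

In this toy loop the generation means form a random walk around the true mean, so later generations carry accumulated drift; averaged over many trials, the non-uniform weighting achieves a visibly lower mean squared error than uniform ERM, in the spirit of the abstract's claim.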