The prevalence and low cost of LLMs have led to a surge in synthetic content. From review sites to court documents, "natural" content has been contaminated by data points that appear similar to natural data but are in fact LLM-generated. In this work we revisit fundamental learning theory questions in this now-ubiquitous setting. We model the scenario as a sequence of learning tasks where the input is a mix of natural and synthetic data, and the learning algorithms are oblivious to the origin of any individual example. We study the possibilities and limitations of empirical risk minimization (ERM) in this setting. For the problem of estimating the mean of an arbitrary $d$-dimensional distribution, we find that while ERM converges to the true mean, it is outperformed by an algorithm that assigns non-uniform weights to examples from different generations of data. In the PAC learning setting, the disparity is even starker. We find that ERM does not always converge to the true concept, echoing the model collapse literature. However, we show there are algorithms capable of learning the correct hypothesis for arbitrary VC classes and arbitrary amounts of contamination.
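The mean-estimation contrast can be made concrete with a toy simulation. This is a minimal sketch under assumptions not taken from the paper: Gaussian natural data, synthetic data resampled around the previous generations' pooled estimate, a fixed natural-data fraction `alpha`, and geometric down-weighting of later generations. The paper's actual weighting scheme and contamination model may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
true_mean = np.zeros(d)
n_per_gen, n_gens = 200, 6
alpha = 0.5  # assumed fraction of natural examples in each later generation

gens = []
est = None
for t in range(n_gens):
    natural = rng.normal(true_mean, 1.0, size=(n_per_gen, d))
    if est is None:
        batch = natural  # generation 0 is purely natural
    else:
        # Synthetic examples are drawn around the previous pooled estimate,
        # so estimation errors can compound across generations.
        synthetic = rng.normal(est, 1.0, size=(n_per_gen, d))
        k = int(alpha * n_per_gen)
        batch = np.vstack([natural[:k], synthetic[k:]])
    gens.append(batch)
    # Running ERM estimate, used here as the "generator" of the next generation.
    est = np.vstack(gens).mean(axis=0)

# Uniform ERM: pool every example across all generations equally.
erm = np.vstack(gens).mean(axis=0)

# Non-uniform alternative: down-weight later (more contaminated) generations.
# The geometric weights are purely illustrative.
w = 0.5 ** np.arange(n_gens)
w /= w.sum()
weighted = sum(wi * g.mean(axis=0) for wi, g in zip(w, gens))

print("uniform ERM error:  ", np.linalg.norm(erm - true_mean))
print("weighted est. error:", np.linalg.norm(weighted - true_mean))
```

The simulation only illustrates the mechanism: later generations carry accumulated generator error, so an estimator that trusts them less can beat uniform pooling, even though both converge as sample sizes grow.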