The problem of pre-training data detection for large language models (LLMs) has received growing attention due to its implications for critical issues such as copyright violation and test-data contamination. Despite their improved performance, existing methods (including the state of the art, Min-K%) are mostly built on simple heuristics and lack solid, principled foundations. In this work, we propose a novel and theoretically motivated methodology for pre-training data detection, named Min-K%++. Specifically, our key insight is that maximum-likelihood training drives training samples toward local maxima of the modeled distribution along each input dimension, which allows us to translate the problem into one of identifying local maxima. We then design our method accordingly for the discrete distributions modeled by LLMs: the core idea is to determine whether an input token forms a mode of, or has relatively high probability under, the conditional categorical distribution over the vocabulary. Empirically, the proposed method achieves new state-of-the-art performance across multiple settings. On the WikiMIA benchmark, Min-K%++ outperforms the runner-up by 6.2% to 10.5% in detection AUROC averaged over five models. On the more challenging MIMIR benchmark, it consistently improves upon reference-free methods while performing on par with the reference-based method that requires an extra reference model.
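To make the core idea concrete, below is a minimal PyTorch sketch of a Min-K%++-style score, under one plausible instantiation of the description above: each observed token's log-probability is standardized by the mean and standard deviation of log-probabilities under the model's next-token (conditional categorical) distribution, and the lowest k% of these standardized token scores are averaged. The function name, tensor layout, and clamping constant are illustrative assumptions, not the paper's exact specification.

```python
import torch
import torch.nn.functional as F

def min_k_pp_score(logits: torch.Tensor, input_ids: torch.Tensor, k: float = 0.2) -> float:
    """Sketch of a Min-K%++-style membership score (assumed instantiation).

    logits: (seq_len, vocab_size) next-token logits from a causal LM,
            aligned so that logits[t] predicts input_ids[t].
    k: fraction of lowest-scoring tokens to average (e.g. 0.2 for 20%).
    Higher score => input is more likely to be pre-training data.
    """
    log_probs = F.log_softmax(logits, dim=-1)  # log p(z | x_<t) for every vocab item z
    probs = log_probs.exp()

    # Statistics of log-probability under each conditional categorical
    # distribution: mean mu_t = E_z[log p(z | x_<t)] and std sigma_t.
    mu = (probs * log_probs).sum(dim=-1)
    var = (probs * log_probs.square()).sum(dim=-1) - mu.square()
    sigma = var.clamp_min(1e-8).sqrt()  # clamp is an assumed numerical guard

    # Standardized token score: how far the observed token's log-prob sits
    # above the distribution's average, i.e. whether it is near the mode.
    token_log_probs = log_probs.gather(-1, input_ids.unsqueeze(-1)).squeeze(-1)
    scores = (token_log_probs - mu) / sigma

    # Aggregate over the bottom k% of token scores, as in Min-K%.
    num = max(1, int(k * scores.numel()))
    return scores.topk(num, largest=False).values.mean().item()
```

In this reading, a token with a raw log-probability that is high in absolute terms but unremarkable relative to its conditional distribution contributes little, which is what distinguishes the mode-seeking score from Min-K%'s raw log-probability heuristic.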