As the scale of training corpora for large language models (LLMs) grows, model developers become increasingly reluctant to disclose details of their training data. This lack of transparency poses challenges to scientific evaluation and ethical deployment. Recently, pretraining data detection approaches, which infer whether a given text was part of an LLM's training data through black-box access, have been explored. The Min-K\% Prob method, which has achieved state-of-the-art results, assumes that a non-training example tends to contain a few outlier words with low token probabilities. However, its effectiveness may be limited, as it tends to misclassify non-training texts that contain many common words assigned high probabilities by the LLM. To address this issue, we introduce a divergence-based calibration method, inspired by the divergence-from-randomness concept, to calibrate token probabilities for pretraining data detection. We compute the cross-entropy (i.e., the divergence) between the token probability distribution and the token frequency distribution to derive a detection score. We have also developed a Chinese-language benchmark, PatentMIA, to assess the performance of detection approaches for LLMs on Chinese text. Experimental results on English-language benchmarks and PatentMIA demonstrate that our proposed method significantly outperforms existing methods. Our code and the PatentMIA benchmark are available at https://github.com/zhang-wei-chao/DC-PDD.
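The abstract's core idea, contrasting a model's token probabilities with reference token frequencies via cross-entropy, can be illustrated with a minimal sketch. This is not the authors' implementation (see the linked repository for that); the function name, the per-token inputs, and the length normalization here are illustrative assumptions.

```python
import math

def divergence_score(token_probs, token_freqs, eps=1e-12):
    """Sketch of a divergence-based detection score (hypothetical helper).

    token_probs: model-assigned probability for each token in the text.
    token_freqs: relative frequency of the same tokens in a reference corpus.

    Computes a length-normalized cross-entropy between the model's token
    probabilities and the reference frequencies. Intuitively, common words
    (high corpus frequency) contribute little even when the model assigns
    them high probability, which calibrates away the failure mode of
    probability-only scores on frequent-word-heavy non-training text.
    """
    if len(token_probs) != len(token_freqs) or not token_probs:
        raise ValueError("inputs must be equal-length, non-empty sequences")
    # -sum p_i * log f_i, averaged over tokens; eps guards log(0)
    total = -sum(p * math.log(f + eps) for p, f in zip(token_probs, token_freqs))
    return total / len(token_probs)
```

Under this sketch, a text whose high-probability tokens are merely common words scores lower than one whose high-probability tokens are rare in the reference corpus, which is the calibration effect the method targets.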