As the scale of training corpora for large language models (LLMs) grows, model developers become increasingly reluctant to disclose details about their data. This lack of transparency poses challenges to scientific evaluation and ethical deployment. Recently, pretraining data detection approaches, which infer whether a given text was part of an LLM's training data through black-box access, have been explored. The Min-K\% Prob method, which has achieved state-of-the-art results, assumes that a non-training example tends to contain a few outlier words with low token probabilities. However, its effectiveness may be limited, as it tends to misclassify non-training texts that contain many common words to which LLMs assign high probabilities. To address this issue, we introduce a divergence-based calibration method, inspired by the divergence-from-randomness concept, to calibrate token probabilities for pretraining data detection. We compute the cross-entropy (i.e., the divergence) between the token probability distribution and the token frequency distribution to derive a detection score. In addition, we develop a Chinese-language benchmark, PatentMIA, to assess the performance of detection approaches for LLMs on Chinese text. Experimental results on English-language benchmarks and PatentMIA demonstrate that our proposed method significantly outperforms existing methods. Our code and the PatentMIA benchmark are available at \url{https://github.com/zhang-wei-chao/DC-PDD}.
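As a rough illustration of the divergence-based calibration described above, the sketch below computes a cross-entropy-style score between the token probabilities assigned by an LLM and token frequencies estimated from a reference corpus. All names (`divergence_score`, `ref_token_freq`, `floor`), the per-text averaging, and the toy frequency table are illustrative assumptions rather than the paper's implementation; the released code at the repository above is authoritative.

```python
import math
from typing import Dict, Sequence


def divergence_score(token_ids: Sequence[int],
                     token_probs: Sequence[float],
                     ref_token_freq: Dict[int, float],
                     floor: float = 1e-10) -> float:
    """Cross-entropy between model-assigned token probabilities and a
    reference token-frequency distribution, averaged over the text.

    token_ids      : token ids of the candidate text
    token_probs    : probability the LLM assigns to each token given its prefix
    ref_token_freq : relative frequency of each token id in a reference corpus
    floor          : small constant for tokens unseen in the reference corpus
    """
    total = 0.0
    for tok, p in zip(token_ids, token_probs):
        f = max(ref_token_freq.get(tok, 0.0), floor)
        # p_i * log(1 / f_i): very common tokens (large f_i) contribute little
        # even when the LLM assigns them high probability, which is the
        # calibration effect the abstract describes.
        total += p * math.log(1.0 / f)
    return total / max(len(token_ids), 1)


# Toy usage: two short "texts" over a 5-token vocabulary.
ref_freq = {0: 0.40, 1: 0.30, 2: 0.15, 3: 0.10, 4: 0.05}
common_text = divergence_score([0, 1, 0], [0.9, 0.8, 0.9], ref_freq)
rare_text = divergence_score([3, 4, 4], [0.9, 0.8, 0.9], ref_freq)
print(f"common-word text: {common_text:.3f}, rare-word text: {rare_text:.3f}")
```

How the resulting score is thresholded, and which direction indicates membership in the training data, are not specified in the abstract and are left open here.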