As the scale of training corpora for large language models (LLMs) grows, model developers become increasingly reluctant to disclose details of their data. This lack of transparency poses challenges to scientific evaluation and ethical deployment. Recently, pretraining data detection approaches, which infer whether a given text was part of an LLM's training data through black-box access, have been explored. The Min-K% Prob method, which has achieved state-of-the-art results, assumes that a non-training example tends to contain a few outlier words with low token probabilities. However, its effectiveness may be limited, as it tends to misclassify non-training texts that contain many common words assigned high probabilities by the LLM. To address this issue, we introduce a divergence-based calibration method, inspired by the divergence-from-randomness concept, to calibrate token probabilities for pretraining data detection. Specifically, we compute the cross-entropy (i.e., the divergence) between the token probability distribution and the token frequency distribution to derive a detection score. We have also developed PatentMIA, a Chinese-language benchmark for assessing the performance of detection approaches for LLMs on Chinese text. Experimental results on English-language benchmarks and PatentMIA demonstrate that our proposed method significantly outperforms existing methods.
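The cross-entropy-based scoring idea above can be illustrated with a minimal sketch. This is not the paper's exact scoring rule: it assumes the per-token probabilities from the target LLM and the corpus-level frequencies of the same tokens are already available as aligned lists, and the function name is hypothetical.

```python
import math

def detection_score(token_probs, token_freqs):
    """Illustrative divergence-based score for one text: the cross-entropy
    H(p, q) = -sum_i p_i * log(q_i) between the model's token probabilities
    (p) and a reference token-frequency distribution (q).

    Both arguments are aligned per-token lists for the candidate text.
    This is a hedged sketch of the general idea, not the authors' method.
    """
    eps = 1e-12  # guard against zero frequency for tokens unseen in the reference corpus
    return -sum(p * math.log(f + eps) for p, f in zip(token_probs, token_freqs))

# Hypothetical usage: common words get high model probabilities AND high
# corpus frequencies, so the frequency term calibrates away the inflated
# probabilities that mislead a raw Min-K%-style score.
probs = [0.9, 0.8, 0.7]     # assumed per-token model probabilities
freqs = [0.05, 0.01, 0.02]  # assumed corpus frequencies of those tokens
score = detection_score(probs, freqs)
```

In practice the token probabilities would come from a forward pass of the target LLM and the frequencies from a large reference corpus; a threshold on the resulting score would then separate training from non-training texts.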