Recent claims about the impressive capabilities of large language models (LLMs) are usually supported by evaluation on open-access benchmarks. Given the vast size and wide-ranging sources of LLMs' training data, that data could explicitly or implicitly include test data, making LLMs more susceptible to data contamination. However, due to the opacity of training data, black-box access to models, and the rapid growth of synthetic training data, detecting and mitigating data contamination for LLMs faces significant challenges. In this paper, we propose CDD, which stands for Contamination Detection via output Distribution for LLMs. CDD requires only sampled texts to detect data contamination, by identifying the peakedness of the LLM's output distribution. To mitigate the impact of data contamination in evaluation, we also present TED (Trustworthy Evaluation via output Distribution), which corrects the LLM's output distribution. To facilitate this study, we introduce two benchmarks, DetCon and ComiEval, for the data contamination detection and contamination mitigation evaluation tasks, respectively. Extensive experimental results show that CDD achieves average relative improvements of 21.8\%-30.2\% over other contamination detection approaches in terms of Accuracy, F1 Score, and AUC, and can effectively detect implicit contamination. TED mitigates up to 66.9\% of the performance improvement attributed to data contamination across various contamination setups. In real-world applications, we reveal that ChatGPT exhibits a high risk of data contamination on the HumanEval benchmark.
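To make the core intuition behind detection-via-peakedness concrete, here is a minimal sketch, not the paper's actual CDD algorithm: if a model has memorized a test instance, repeated sampling tends to concentrate on one (near-)identical completion, so the empirical output distribution is sharply peaked. The function names, the exact-match proxy for peakedness, and the threshold value below are all illustrative assumptions.

```python
from collections import Counter

def peakedness(samples):
    """Fraction of sampled outputs matching the single most frequent output.
    A crude exact-match proxy for how concentrated (peaked) the model's
    empirical output distribution is; 1.0 means every sample is identical."""
    if not samples:
        return 0.0
    counts = Counter(samples)
    return counts.most_common(1)[0][1] / len(samples)

def flag_contamination(samples, threshold=0.8):
    """Flag likely contamination when sampled outputs concentrate on one
    completion (threshold is a hypothetical choice, not from the paper)."""
    return peakedness(samples) >= threshold
```

In practice a softer similarity measure (e.g., edit distance between samples) would be needed, since memorized outputs may differ by a few tokens rather than match exactly.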