CDD (Contamination Detection via output Distribution) identifies data contamination by measuring the peakedness of a model's output distribution, estimated from sampled generations. We study the conditions under which this approach succeeds and fails on small language models ranging from 70M to 410M parameters. Using controlled contamination experiments on GSM8K, HumanEval, and MATH, we find that CDD's effectiveness depends critically on whether fine-tuning produces verbatim memorization. With low-rank adaptation, models can learn from contaminated data without memorizing it verbatim, and CDD performs at chance level even when the data is verifiably contaminated. Only when fine-tuning capacity is sufficient to induce memorization does CDD recover strong detection accuracy. Our results characterize a memorization threshold that governs detectability and highlight a practical consideration: parameter-efficient fine-tuning can produce contamination that output-distribution methods fail to detect. Our code is available at https://github.com/Sela-Omer/Contamination-Detection-Small-LM.
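As a minimal sketch of the peakedness idea behind CDD: sample several completions for a prompt, compare each to the greedy completion by normalized edit distance, and treat a high fraction of near-duplicates as evidence of a peaked (possibly memorized) output distribution. The scoring rule below, and the names `cdd_score` and `alpha`, are illustrative assumptions rather than the exact formulation from the paper.

```python
def edit_distance(a: str, b: str) -> int:
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def cdd_score(greedy: str, samples: list[str], alpha: float = 0.05) -> float:
    """Fraction of sampled outputs within normalized edit distance
    `alpha` of the greedy output; higher means a more peaked distribution."""
    hits = 0
    for s in samples:
        denom = max(len(greedy), len(s), 1)
        if edit_distance(greedy, s) / denom <= alpha:
            hits += 1
    return hits / len(samples)

# Toy usage: a peaked (memorization-like) vs. a diffuse output distribution.
greedy = "The answer is 42."
peaked = ["The answer is 42."] * 9 + ["The answer is 41."]
diffuse = ["It could be 42.", "Maybe 7?", "The answer is 42.",
           "I think 41.", "42, probably.", "Hard to say.",
           "The result is 6.", "Likely 42.", "No idea.", "Perhaps 13."]
print(cdd_score(greedy, peaked))   # high score -> flagged as contaminated
print(cdd_score(greedy, diffuse))  # low score -> not flagged
```

Under this reading, the paper's finding is that low-rank adaptation can improve benchmark performance without driving this score above chance, while full-capacity fine-tuning pushes the distribution toward the peaked regime that CDD detects.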