Reinforcement learning with verifiable rewards (RLVR) is central to training modern reasoning models, but undisclosed training data raises concerns about benchmark contamination. Unlike pretraining, which optimizes models via token-level likelihoods, RLVR fine-tunes models on reward feedback from self-generated reasoning trajectories, making conventional likelihood-based detection methods less effective. We show that RLVR induces a distinctive behavioral signature: prompts encountered during RLVR training elicit more rigid, mutually similar generations, while unseen prompts retain greater diversity. We introduce Min-$k$NN Distance, a simple black-box detector that quantifies this collapse by sampling multiple completions for a given prompt and averaging the $k$ smallest nearest-neighbor edit distances among them. Min-$k$NN Distance requires no access to the reference model or token probabilities. Experiments across multiple RLVR-trained reasoning models show that Min-$k$NN Distance reliably distinguishes RL-seen examples from unseen ones and outperforms existing membership-inference and RL contamination detection baselines.
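The following is a minimal sketch of the scoring rule as described above, not the paper's exact implementation: it assumes character-level Levenshtein edit distance between completions and no length normalization, and the function names `levenshtein` and `min_knn_distance` are illustrative.

```python
def levenshtein(a: str, b: str) -> int:
    """Character-level edit distance via two-row dynamic programming."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]


def min_knn_distance(completions: list[str], k: int) -> float:
    """Average of the k smallest nearest-neighbor edit distances.

    Each sampled completion's nearest-neighbor distance is its smallest
    edit distance to any *other* completion; the score is the mean of
    the k smallest such values. A low score indicates collapsed,
    near-duplicate generations, i.e. a likely RL-seen prompt.
    (Assumption: raw, unnormalized distances; the paper may normalize.)
    """
    n = len(completions)
    assert n >= 2 and 1 <= k <= n
    nn = [min(levenshtein(completions[i], completions[j])
              for j in range(n) if j != i)
          for i in range(n)]
    return sum(sorted(nn)[:k]) / k


# Toy usage: near-duplicate completions yield a small score.
samples = ["The answer is 42.", "The answer is 42!", "Thus, the answer is 42."]
print(min_knn_distance(samples, k=2))
```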