If we cannot inspect the training data of a large language model (LLM), how can we ever know what it has seen? We believe the most compelling evidence arises when the model itself freely reproduces the target content. To that end, we propose RECAP, an agentic pipeline designed to elicit and verify memorized training data from LLM outputs. At the heart of RECAP is a feedback-driven loop: an initial extraction attempt is evaluated by a secondary language model, which compares the output against a reference passage and identifies discrepancies. These discrepancies are then translated into minimal correction hints, which are fed back into the target model to guide subsequent generations. In addition, to address alignment-induced refusals, RECAP includes a jailbreaking module that detects and overcomes such barriers. We evaluate RECAP on EchoTrace, a new benchmark spanning more than 30 full books, and the results show that RECAP yields substantial gains over single-iteration approaches. For instance, with GPT-4.1, the average ROUGE-L score for copyrighted-text extraction improved from 0.38 to 0.47, a nearly 24% increase.
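The feedback-driven loop described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: `target_generate` and `judge_hint` are hypothetical stand-ins for calls to the target and judge models, and the ROUGE-L scorer is a simple LCS-based F-measure used to track extraction quality across iterations.

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l(candidate, reference):
    """ROUGE-L F-measure over whitespace tokens."""
    c, r = candidate.split(), reference.split()
    if not c or not r:
        return 0.0
    l = lcs_len(c, r)
    if l == 0:
        return 0.0
    prec, rec = l / len(c), l / len(r)
    return 2 * prec * rec / (prec + rec)

def recap_loop(reference, target_generate, judge_hint, max_iters=3):
    """Iteratively refine an extraction attempt using judge feedback.

    target_generate(hint) -> str : hypothetical target-model call
    judge_hint(output, reference) -> str : hypothetical judge-model call
    """
    hint, best, best_score = None, "", 0.0
    for _ in range(max_iters):
        output = target_generate(hint)
        score = rouge_l(output, reference)
        if score > best_score:
            best, best_score = output, score
        # Judge compares output to the reference and emits a correction hint.
        hint = judge_hint(output, reference)
    return best, best_score
```

In practice the judge would return natural-language hints (e.g. pointing at the first divergence from the reference), and the loop would also invoke the jailbreaking module whenever the target model refuses to continue.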