Large language models (LLMs) have been shown to memorize and reproduce content from their training data, raising significant privacy concerns, especially with web-scale datasets. Existing methods for detecting memorization are primarily sample-specific, relying on manually crafted or discretely optimized memory-inducing prompts generated on a per-sample basis; this becomes impractical for dataset-level detection because of the prohibitive computational cost of iterating through every sample. In real-world scenarios, data owners may need to verify whether a suspect LLM has memorized their dataset, particularly if the model may have collected the data from the web without authorization. To address this, we introduce MemHunter, which trains a memory-inducing LLM and employs hypothesis testing to efficiently detect memorization at the dataset level, without requiring sample-specific memory-inducing prompts. Experiments on models such as Pythia and Llama demonstrate that MemHunter can extract up to 40% more training data than existing methods under constrained time budgets and can reduce search time by up to 80% when integrated as a plug-in. Crucially, MemHunter is the first method capable of dataset-level memorization detection, providing a critical tool for assessing privacy risks in LLMs trained on large-scale datasets.
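To make the dataset-level hypothesis-testing idea concrete, here is a minimal sketch, not the paper's actual statistical procedure: per-sample extraction success is treated as a Bernoulli trial, a null extraction rate is estimated on non-member control data, and a one-sided binomial test asks whether the owner's dataset is extracted significantly more often. The helper `extraction_success` (which would wrap the memory-inducing LLM and the suspect model) and the control-set construction are assumptions introduced purely for illustration.

```python
# Minimal sketch (illustrative only): framing dataset-level memorization
# detection as a one-sided hypothesis test. `extraction_success` is a
# hypothetical callable returning True when a memory-inducing prompt makes
# the suspect model reproduce the sample's continuation verbatim.
from scipy.stats import binomtest

def detect_dataset_memorization(suspect_samples, control_samples,
                                extraction_success, alpha=0.05):
    """Test H0: the owner's samples are extracted no more often than
    non-member control data (i.e., no evidence of memorization)."""
    # Null extraction rate, estimated on data the model cannot have memorized;
    # clamped away from zero so the binomial test stays well behaved.
    control_hits = sum(extraction_success(s) for s in control_samples)
    p_null = max(control_hits / len(control_samples), 1e-6)

    # Observed extraction successes on the data owner's samples.
    hits = sum(extraction_success(s) for s in suspect_samples)

    # One-sided binomial test: is the hit rate significantly above p_null?
    result = binomtest(hits, n=len(suspect_samples), p=p_null,
                       alternative="greater")
    return result.pvalue < alpha, result.pvalue
```

Because the test aggregates many cheap per-sample extraction attempts into a single decision, it avoids the per-sample prompt optimization that makes existing approaches prohibitively expensive at dataset scale.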