Current techniques for privacy auditing of large language models (LLMs) have limited efficacy: they rely on basic approaches to generate canaries, which leads to weak membership inference attacks that in turn give loose lower bounds on empirical privacy leakage. We develop canaries that are far more effective than those used in prior work, under threat models that cover a range of realistic settings. Through extensive experiments on multiple families of fine-tuned LLMs, we demonstrate that our approach sets a new standard for detecting privacy leakage. When measuring the memorization rate of non-privately trained LLMs, our canaries substantially outperform prior approaches. For example, on the Qwen2.5-0.5B model, our canaries achieve a $49.6\%$ true-positive rate (TPR) at a $1\%$ false-positive rate (FPR), vastly surpassing the prior approach's $4.2\%$ TPR at the same FPR. Our method can be used to provide a privacy audit of $\varepsilon \approx 1$ for a model trained with a theoretical $\varepsilon$ of 4. To the best of our knowledge, this is the first time a privacy audit of LLM training has achieved nontrivial auditing success in the setting where the attacker cannot train shadow models, insert gradient canaries, or access the model at every iteration.
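For context, the link between membership inference performance and an audited $\varepsilon$ can be sketched with the standard hypothesis-testing characterization of $(\varepsilon,\delta)$-differential privacy, under which any attack's operating point must satisfy the relation below; this generic conversion and the illustrative arithmetic that follows are a sketch on our part and not necessarily the exact auditing procedure used in this work:
\[
\mathrm{TPR} \;\le\; e^{\varepsilon}\cdot \mathrm{FPR} + \delta
\quad\Longrightarrow\quad
\varepsilon \;\ge\; \log\!\frac{\mathrm{TPR}-\delta}{\mathrm{FPR}}.
\]
Ignoring finite-sample confidence intervals (which a rigorous audit must account for) and taking $\delta \approx 0$, an attack achieving $49.6\%$ TPR at $1\%$ FPR would correspond to an empirical lower bound of roughly $\log(0.496/0.01) \approx 3.9$; since the non-privately trained model in that example has no finite $\varepsilon$, this figure simply quantifies the strength of the memorization signal rather than refuting a stated guarantee.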