Language models are prone to memorizing parts of their training data, which makes them vulnerable to extraction attacks. Existing research often examines isolated setups, such as evaluating extraction risks from a single model or with a fixed prompt design. However, a real-world adversary could access models across various sizes and checkpoints, as well as exploit prompt sensitivity, resulting in a considerably larger attack surface than previously studied. In this paper, we revisit extraction attacks from an adversarial perspective, focusing on how to leverage the brittleness of language models and the multi-faceted access to the underlying data. We find significant churn in extraction trends: even unintuitive changes to the prompt, or targeting smaller models and earlier checkpoints, can extract distinct information. By combining information from multiple attacks, our adversary is able to increase the extraction risks by up to $2\times$. Furthermore, even with mitigation strategies like data deduplication, we find the same escalation of extraction risks against a real-world adversary. We conclude with a set of case studies, including detecting pre-training data, identifying copyright violations, and extracting personally identifiable information, showing how our more realistic adversary can outperform existing adversaries in the literature.