Large Language Models (LLMs) promise to revolutionize computing broadly, but their complexity and extensive training data also expose significant privacy vulnerabilities. One of the simplest privacy risks associated with LLMs is their susceptibility to membership inference attacks (MIAs), in which an adversary aims to determine whether a specific data point was part of the model's training set. Although this risk is well known, state-of-the-art MIA methodologies rely on training multiple computationally costly shadow models, making risk evaluation prohibitive for large models. Here we adapt a recent line of work that uses quantile regression to mount membership inference attacks; we extend this work by proposing a low-cost MIA that leverages an ensemble of small quantile regression models to determine whether a document belongs to the model's training set. We demonstrate the effectiveness of this approach on fine-tuned LLMs from several families (OPT, Pythia, Llama) and across multiple datasets. In all scenarios we obtain comparable or better accuracy than state-of-the-art shadow-model approaches, with as little as 6% of their computation budget. We also demonstrate increased effectiveness against target models trained for multiple epochs, as well as robustness to architecture mis-specification: we can mount an effective attack using a model with a different tokenizer and architecture, without requiring knowledge of the target model.
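To make the attack concrete, below is a minimal sketch of the ensemble quantile-regression idea, not the paper's exact implementation. The `featurize` and `target_loss` helpers are hypothetical stand-ins for a document feature extractor and the target model's per-document loss; the regressor choice (scikit-learn's `GradientBoostingRegressor` with quantile loss), feature set, and aggregation rule are all assumptions.

```python
# Minimal sketch of an ensemble quantile-regression MIA.
# Assumes two hypothetical helpers (not defined here):
#   featurize(doc)    -> np.ndarray of document features
#   target_loss(doc)  -> float, the target model's loss on `doc`
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

ALPHA = 0.05     # target false-positive rate on non-member documents
N_MODELS = 8     # number of small quantile regressors in the ensemble

def fit_ensemble(nonmember_docs, seed=0):
    """Fit small quantile regressors on known NON-member documents.

    Each regressor predicts the ALPHA-quantile of the target model's loss
    conditioned on document features; bootstrap resampling decorrelates
    the ensemble members."""
    rng = np.random.default_rng(seed)
    X = np.stack([featurize(d) for d in nonmember_docs])
    y = np.array([target_loss(d) for d in nonmember_docs])
    ensemble = []
    for _ in range(N_MODELS):
        idx = rng.integers(0, len(y), size=len(y))  # bootstrap resample
        reg = GradientBoostingRegressor(
            loss="quantile", alpha=ALPHA, n_estimators=100, max_depth=3
        )
        reg.fit(X[idx], y[idx])
        ensemble.append(reg)
    return ensemble

def is_member(doc, ensemble):
    """Flag `doc` as a training member if its loss under the target model
    falls below the predicted ALPHA-quantile of non-member losses:
    members tend to have atypically low loss given their features."""
    x = featurize(doc).reshape(1, -1)
    threshold = np.mean([reg.predict(x)[0] for reg in ensemble])
    return target_loss(doc) < threshold
```

Because the regressors are fit only on non-member data, the predicted quantile calibrates a per-document decision threshold whose marginal false-positive rate is roughly ALPHA, without training any shadow copies of the target model.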