Membership inference attacks (MIAs) pose a significant privacy threat in federated learning (FL), as they allow adversaries to determine whether a client's private dataset contains a specific data sample. While defenses against membership inference attacks in standard FL have been well studied, the recent shift toward federated fine-tuning has introduced new, largely unexplored attack surfaces. To highlight this vulnerability in the emerging FL paradigm, we demonstrate that federated prompt-tuning, which adapts pre-trained models with small learnable input prefixes to improve efficiency, also exposes a new vector for privacy attacks. We propose PromptMIA, a membership inference attack tailored to federated prompt-tuning, in which a malicious server inserts adversarially crafted prompts and monitors their updates during collaborative training to accurately determine whether a target data point is in a client's private dataset. We formalize this threat as a security game and empirically show that PromptMIA consistently attains a high advantage in this game across diverse benchmark datasets. Our theoretical analysis further establishes a lower bound on the attack's advantage, which explains and supports the consistently high advantage observed in our empirical results. We also investigate the effectiveness of standard membership inference defenses originally developed for gradient- or output-based attacks and analyze their interaction with the distinct threat landscape posed by PromptMIA. The results highlight non-trivial challenges for current defenses and offer insights into their limitations, underscoring the need for defense strategies specifically tailored to prompt-tuning in federated settings.
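The attack mechanism described above can be illustrated with a minimal, self-contained sketch (all names and the gradient model are hypothetical, not the paper's implementation): the server keeps a "signature" of the prompt gradient a target sample would induce, observes a client's aggregated prompt update, and thresholds the cosine similarity between the two to decide membership.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 64          # soft-prompt embedding dimension (illustrative)
N_SAMPLES = 200   # candidate data pool

# Toy stand-in: each sample induces a fixed per-sample gradient on the
# soft prompt. A real attack would compute these through the model.
sample_grads = rng.normal(size=(N_SAMPLES, DIM))
target_idx = 7

def client_prompt_update(member_indices):
    """Simulated client prompt update: mean gradient over its local data."""
    return sample_grads[member_indices].mean(axis=0)

def membership_score(update, target_grad):
    """Cosine similarity between the observed update and the target's
    gradient signature; higher suggests the target was in the batch."""
    return float(update @ target_grad /
                 (np.linalg.norm(update) * np.linalg.norm(target_grad)))

target_grad = sample_grads[target_idx]
member_set = np.array([3, 7, 11, 15, 19])      # contains the target (idx 7)
nonmember_set = np.array([20, 24, 28, 32, 36])  # does not contain it

s_in = membership_score(client_prompt_update(member_set), target_grad)
s_out = membership_score(client_prompt_update(nonmember_set), target_grad)
print(f"score if member: {s_in:.3f}, score if non-member: {s_out:.3f}")
```

In this toy model, the update from the dataset containing the target aligns noticeably more with the target's gradient signature, mirroring the advantage gap the abstract describes; actual PromptMIA uses adversarially crafted prompts to amplify this signal.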