Membership inference attacks (MIAs) pose a serious threat to the privacy of the data used to train machine learning models, allowing adversaries to determine whether a specific sample was included in the training set. Although federated learning (FL) is widely regarded as a privacy-aware training paradigm due to its decentralized nature, recent evidence shows that the final global model can still leak sensitive membership information through black-box access. In this paper, we introduce Res-MIA, a novel training-free, black-box membership inference attack that exploits the sensitivity of deep models to high-frequency input details. Res-MIA progressively degrades the input resolution using controlled downsampling and restoration operations, and analyzes the resulting decay in the model's prediction confidence. Our key insight is that training samples exhibit a significantly steeper confidence decline under resolution erosion than non-member samples, revealing a robust membership signal. Res-MIA requires no shadow models, no auxiliary data, and only a limited number of forward queries to the target model. We evaluate the proposed attack on a federated ResNet-18 trained on CIFAR-10, where it consistently outperforms existing training-free baselines and achieves an AUC of up to 0.88 with minimal computational overhead. These findings highlight frequency-sensitive overfitting as an important and previously underexplored source of privacy leakage in federated learning, and emphasize the need for privacy-aware model designs that reduce reliance on fine-grained, non-robust input features.
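The resolution-erosion idea sketched above can be illustrated in a few lines. The snippet below is a minimal, hypothetical sketch (not the paper's implementation): `resolution_erosion` stands in for the controlled downsample/restore operation, and `membership_score` summarizes the confidence decay across a small set of erosion scales. The block-averaging downsampler, the nearest-neighbor restoration, the scale schedule, and the decay statistic are all illustrative assumptions.

```python
import numpy as np

def resolution_erosion(image, scale):
    """Degrade resolution by block-averaging with factor `scale`, then
    restore the original size via nearest-neighbor upsampling.
    (Illustrative stand-in for the paper's downsample/restore step.)"""
    h, w = image.shape[:2]
    # crop so both dimensions divide evenly, then block-average
    cropped = image[:h - h % scale, :w - w % scale]
    small = cropped.reshape(h // scale, scale, w // scale, scale, -1).mean(axis=(1, 3))
    # nearest-neighbor restoration back to (roughly) the original size
    return np.repeat(np.repeat(small, scale, axis=0), scale, axis=1)

def membership_score(model_confidence, image, scales=(1, 2, 4, 8)):
    """Score how steeply the model's confidence decays under progressive
    resolution erosion; per the paper's insight, steeper decay suggests
    the sample was a training member. The decay statistic here
    (full-resolution confidence minus mean eroded confidence) is an
    assumed, simple choice."""
    confidences = [model_confidence(resolution_erosion(image, s)) for s in scales]
    return confidences[0] - float(np.mean(confidences[1:]))
```

A model that has memorized fine-grained, high-frequency details of a training image will see its confidence collapse once those details are averaged away, yielding a large score; a non-member sample, classified from more robust features, decays more gently.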