A Membership Inference Attack (MIA) assesses how much a target machine learning model reveals about its training data by determining whether specific query instances were part of the training set. State-of-the-art MIAs rely on training hundreds of shadow models that are independent of the target model, leading to significant computational overhead. In this paper, we introduce the Imitative Membership Inference Attack (IMIA), which employs a novel imitative training technique to strategically construct a small number of target-informed imitative models that closely replicate the target model's behavior for inference. Extensive experimental results demonstrate that IMIA substantially outperforms existing MIAs across various attack settings while requiring less than 5% of the computational cost of state-of-the-art approaches.
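For intuition, the simplest membership inference test thresholds the target model's per-example loss: examples the model was trained on tend to incur lower loss than unseen ones. This is a minimal generic sketch of that idea, not the paper's IMIA method; the function name, loss values, and threshold below are illustrative assumptions.

```python
import numpy as np

def loss_threshold_mia(losses: np.ndarray, threshold: float) -> np.ndarray:
    """Predict membership for each query example.

    An example is flagged as a training-set member (True) when its loss
    under the target model falls below the threshold, since models tend
    to fit their training data more closely than unseen data.
    """
    return losses < threshold

# Toy losses: members typically score lower than non-members.
member_losses = np.array([0.05, 0.10, 0.20])
nonmember_losses = np.array([0.90, 1.40, 0.75])

pred_members = loss_threshold_mia(member_losses, threshold=0.5)
pred_nonmembers = loss_threshold_mia(nonmember_losses, threshold=0.5)
```

Shadow-model attacks refine this idea by calibrating the threshold per example using many reference models; IMIA instead builds a few imitative models informed by the target itself.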