Model inversion (MI) attacks aim to infer or reconstruct the training dataset by reverse-engineering the target model's weights. Recent advances in generative models have enabled MI attacks to overcome the challenge of producing photo-realistic replicas of the training data, a technique known as generative MI. Generative MI primarily focuses on identifying latent vectors that correspond to specific target labels, leveraging a generative model trained on an auxiliary dataset. However, an important aspect is often overlooked: these attacks fail if the pre-trained generative model lacks the coverage to create an image corresponding to the target label, especially when the target and auxiliary datasets differ significantly. To address this gap, we propose Patch-MI, a method inspired by jigsaw puzzles that offers a novel probabilistic interpretation of MI attacks. Even with a dissimilar auxiliary dataset, our method effectively creates images that closely mimic the distribution of image patches in the target dataset through patch-based reconstruction. Moreover, we numerically demonstrate that Patch-MI improves top-1 attack accuracy by 5\%p over existing methods.
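The latent-vector search at the heart of generative MI can be sketched with toy linear stand-ins. Everything below is a hypothetical illustration, not the paper's architecture: `G` plays the role of a pre-trained generator, `W` a softmax target classifier, and the loop performs gradient ascent on the latent `z` to maximize the target model's confidence for a chosen label.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy stand-ins: a linear "generator" G mapping a latent
# vector z to a flat image, and a linear-softmax "target model" over images.
LATENT_DIM, IMG_DIM, NUM_CLASSES = 4, 6, 3
G = rng.normal(size=(IMG_DIM, LATENT_DIM))   # pre-trained generator (toy)
W = rng.normal(size=(NUM_CLASSES, IMG_DIM))  # target classifier weights (toy)

def generate(z):
    return G @ z

def target_confidence(x, label):
    logits = W @ x
    p = np.exp(logits - logits.max())
    return (p / p.sum())[label]

def invert(label, steps=300, lr=0.01):
    """Gradient ascent on the latent z to maximize the target model's
    confidence for `label` -- the core loop of generative MI."""
    z = rng.normal(size=LATENT_DIM)
    onehot = np.eye(NUM_CLASSES)[label]
    for _ in range(steps):
        logits = W @ generate(z)
        p = np.exp(logits - logits.max())
        p /= p.sum()
        # d log p[label] / dz = G^T W^T (onehot - p)
        z += lr * (G.T @ W.T @ (onehot - p))
    return z

z_star = invert(label=0)
print(f"confidence for label 0: {target_confidence(generate(z_star), 0):.3f}")
```

If the toy generator could not express any image that the classifier assigns to the target label, the ascent would stall at low confidence — the coverage failure that motivates Patch-MI's patch-based reconstruction.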