Model inversion attacks pose a significant privacy threat to machine learning models by reconstructing sensitive data from their outputs. Although various defenses have been proposed to counter these attacks, they typically come at the cost of the classifier's utility, creating a challenging trade-off between privacy protection and model performance. Moreover, most existing defenses require retraining the classifier for enhanced robustness, which is impractical for large-scale, well-established models. This paper introduces a novel defense mechanism that better balances privacy and utility, particularly against adversaries who employ a machine learning model (i.e., an inversion model) to reconstruct private data. Drawing inspiration from data poisoning attacks, which can degrade the performance of machine learning models, we propose a strategy that leverages data poisoning to contaminate the training data of inversion models, thereby thwarting model inversion attacks. We present two defense methods. The first, label-preserving poisoning attacks for all output vectors (LPA), applies subtle perturbations to all output vectors while preserving their labels. Our findings demonstrate that these minor perturbations, introduced through a data poisoning approach, significantly increase the difficulty of data reconstruction without compromising the classifier's utility. We then introduce a second method, label-flipping poisoning for partial output vectors (LFP), which selectively perturbs a small subset of output vectors and flips their labels in the process. Empirical results indicate that LPA is notably effective, outperforming current state-of-the-art defenses. Our data poisoning-based defense thus offers a new retraining-free paradigm that preserves the victim classifier's utility.