Model Inversion (MI) attacks aim to reconstruct private training data by abusing access to machine learning models. Contemporary MI attacks have achieved impressive attack performance, posing serious threats to privacy. Meanwhile, all existing MI defense methods rely on regularization that is in direct conflict with the training objective, resulting in noticeable degradation of model utility. In this work, we take a different perspective and propose a novel and simple Transfer Learning-based Defense against Model Inversion (TL-DMI) to render MI-robust models. Specifically, by leveraging TL, we limit the number of layers that encode sensitive information from the private training dataset, thereby degrading the performance of MI attacks. We conduct an analysis using Fisher Information to justify our method. Our defense is remarkably simple to implement. Without bells and whistles, we show in extensive experiments that TL-DMI achieves state-of-the-art (SOTA) MI robustness. Our code, pre-trained models, demo, and inverted data are available at: https://hosytuyen.github.io/projects/TL-DMI
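To make the transfer-learning recipe concrete, below is a minimal PyTorch sketch of the core idea: start from a backbone pre-trained on public data, freeze the early layers, and fine-tune only the final layers on the private dataset, so that only those layers can encode sensitive information. The ResNet-18 backbone, the `layer4` split point, and the `num_private_classes` / `private_loader` placeholders are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

# Backbone pre-trained on a public dataset (ImageNet here); the backbone
# choice and the split point below are illustrative assumptions.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze all pre-trained parameters so the early layers are never
# updated on, and hence never encode, the private training data.
for p in model.parameters():
    p.requires_grad = False

# Fine-tune only the last residual block and a new classifier head on
# the private data; only these layers can absorb sensitive information.
num_private_classes = 10  # hypothetical number of private identities
model.fc = nn.Linear(model.fc.in_features, num_private_classes)
for p in model.layer4.parameters():
    p.requires_grad = True

# Hypothetical stand-in for the private dataset (random tensors here,
# purely so the sketch runs end to end).
private_loader = DataLoader(
    TensorDataset(
        torch.randn(8, 3, 224, 224),
        torch.randint(0, num_private_classes, (8,)),
    ),
    batch_size=4,
)

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=0.01, momentum=0.9
)
criterion = nn.CrossEntropyLoss()

# Standard fine-tuning loop, touching only the unfrozen layers.
model.train()
for images, labels in private_loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

The design intuition is that restricting fine-tuning to the final layers limits the model's capacity to memorize private features in its earlier representations, which is precisely what MI attacks exploit during reconstruction.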