Predictive machine learning models are increasingly deployed in high-stakes contexts involving sensitive personal data; in these contexts, there is a trade-off between model explainability and data privacy. In this work, we push the boundaries of this trade-off: focusing on foundation models fine-tuned for image classification, we reveal unforeseen privacy risks of post-hoc model explanations and subsequently offer mitigation strategies for such risks. First, we construct VAR-LRT and L1/L2-LRT, two new membership inference attacks based on feature attribution explanations that are significantly more successful than existing explanation-leveraging attacks, particularly in the low false-positive-rate regime, where an adversary can identify specific training set members with confidence. Second, we find empirically that optimized differentially private fine-tuning substantially diminishes the success of these attacks while maintaining high model accuracy. We carry out a systematic empirical investigation of our two new attacks across 5 vision transformer architectures, 5 benchmark datasets, 4 state-of-the-art post-hoc explanation methods, and 4 privacy-strength settings.
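To make the attack idea concrete, the sketch below illustrates one plausible reading of the abstract: a scalar statistic (attribution variance for VAR-LRT, or an L1/L2 norm for L1/L2-LRT) is computed from a sample's feature attribution map, and a likelihood-ratio score compares its fit under "member" versus "non-member" distributions estimated from shadow models. This is a minimal illustrative sketch, not the paper's exact construction; the function names, the Gaussian modeling choice, and the shadow-model setup are assumptions for exposition.

```python
import numpy as np
from scipy.stats import norm

def attribution_statistic(attributions: np.ndarray, kind: str = "var") -> float:
    """Scalar test statistic from a per-sample feature attribution map.

    kind="var" -> variance of attribution values (VAR-LRT-style statistic);
    kind="l1" / "l2" -> L1 or L2 norm of the flattened map (L1/L2-LRT-style).
    """
    a = attributions.ravel()
    if kind == "var":
        return float(np.var(a))
    if kind == "l1":
        return float(np.abs(a).sum())
    if kind == "l2":
        return float(np.sqrt((a ** 2).sum()))
    raise ValueError(f"unknown statistic: {kind}")

def lrt_membership_score(stat_target: float,
                         stats_in: np.ndarray,
                         stats_out: np.ndarray) -> float:
    """Likelihood-ratio membership score (illustrative, LiRA-style assumption).

    Gaussians are fit to the statistic computed on shadow models that did
    (stats_in) and did not (stats_out) include the target sample in training;
    a larger score suggests the target model trained on the sample.
    """
    mu_in, sd_in = stats_in.mean(), stats_in.std() + 1e-12
    mu_out, sd_out = stats_out.mean(), stats_out.std() + 1e-12
    return float(norm.logpdf(stat_target, mu_in, sd_in)
                 - norm.logpdf(stat_target, mu_out, sd_out))
```

Thresholding this score at a value calibrated on the shadow statistics would give the low-false-positive-rate operating points emphasized in the abstract; the actual attack details appear in the body of the paper.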