We apply a state-of-the-art membership inference attack (MIA) to systematically test the practical privacy vulnerability of fine-tuning large image classification models. We focus on understanding the properties of data sets and samples that make them vulnerable to membership inference. In terms of data set properties, we find a strong power-law dependence between the number of examples per class and MIA vulnerability, measured as the true positive rate (TPR) of the attack at a low false positive rate (FPR). We train a linear model to predict the TPR from data set properties and observe a good fit for MIA vulnerability on unseen data. To analyse the phenomenon theoretically, we reproduce the result on a simplified model of membership inference that behaves similarly to our experimental data. We prove that in this model, the logarithm of the difference between the true and false positive rates depends linearly on the logarithm of the number of examples per class. For an individual sample, the gradient norm is predictive of its vulnerability.
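The power-law relationship described above can be sketched as follows: a linear fit in log-log space between examples per class and the attack's advantage (TPR minus FPR). The data below is synthetic and purely illustrative, not taken from the paper; the exponent −0.8 is an arbitrary assumption for the demonstration.

```python
import numpy as np

# Hypothetical illustration of a power-law fit:
# log(TPR - FPR) modeled as a linear function of log(examples per class).
# All numbers here are synthetic, chosen only to show the fitting procedure.

shots_per_class = np.array([2, 4, 8, 16, 32, 64, 128])

# Synthetic attack advantage (TPR - FPR at a fixed low FPR),
# decaying as a power law with an assumed exponent of -0.8.
advantage = 0.5 * shots_per_class ** -0.8

# Fit a line in log-log space:
# log(advantage) = slope * log(shots_per_class) + intercept
slope, intercept = np.polyfit(np.log(shots_per_class), np.log(advantage), 1)

print(f"slope={slope:.2f}, intercept={intercept:.2f}")
```

On real measurements the points would scatter around the line; a slope recovered near the true exponent is what a power-law dependence predicts.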