Membership inference (MI) attacks try to determine if a data sample was used to train a machine learning model. For foundation models trained on unknown Web data, MI attacks are often used to detect copyrighted training materials, measure test set contamination, or audit machine unlearning. Unfortunately, we find that evaluations of MI attacks for foundation models are flawed, because they sample members and non-members from different distributions. For 8 published MI evaluation datasets, we show that blind attacks -- that distinguish the member and non-member distributions without looking at any trained model -- outperform state-of-the-art MI attacks. Existing evaluations thus tell us nothing about membership leakage of a foundation model's training data.
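To illustrate the core idea, here is a minimal sketch of a "blind" attack: it never queries the target model and classifies samples purely from their content. The setup below is hypothetical (not the paper's exact method or datasets) and simulates a common distribution shift in MI benchmarks, where member texts were scraped earlier than non-member texts and thus mention older dates.

```python
# A hypothetical toy benchmark with a temporal shift between the member
# and non-member distributions, and a blind attack that exploits it.
import random
import re

random.seed(0)

def make_sample(year: int) -> str:
    """Synthetic document stamped with a publication year."""
    return f"Archived article from {year} about machine learning."

# Simulated evaluation set: members drawn from pre-2020 data,
# non-members from post-2022 data -- no model is involved anywhere.
members = [make_sample(random.randint(2010, 2019)) for _ in range(100)]
non_members = [make_sample(random.randint(2023, 2024)) for _ in range(100)]

def blind_score(text: str) -> int:
    """Score a sample using only its content: the earliest year it
    mentions. Lower scores suggest 'member' under the simulated shift."""
    years = [int(y) for y in re.findall(r"\b(?:19|20)\d{2}\b", text)]
    return min(years) if years else 9999

def predict_member(text: str) -> bool:
    # Simple threshold rule: anything that looks pre-2021 is a "member".
    return blind_score(text) < 2021

correct = sum(predict_member(t) for t in members) + \
          sum(not predict_member(t) for t in non_members)
accuracy = correct / (len(members) + len(non_members))
print(f"blind attack accuracy: {accuracy:.2f}")
```

On this toy setup the blind rule separates the two sets perfectly, despite knowing nothing about any trained model. This mirrors the paper's argument: when members and non-members come from different distributions, an evaluation rewards attacks for detecting the distribution shift rather than actual membership leakage.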