In a membership inference attack (MIA), an attacker exploits the overconfidence exhibited by typical machine learning models to determine whether a specific data point was used to train a target model. In this paper, we analyze the performance of the likelihood ratio attack (LiRA) within an information-theoretic framework that enables the investigation of the impact of the aleatoric uncertainty inherent in the true data-generation process, of the epistemic uncertainty caused by a limited training data set, and of the calibration level of the target model. We compare three settings in which the attacker receives decreasingly informative feedback from the target model: confidence vector (CV) disclosure, in which the full output probability vector is released; true label confidence (TLC) disclosure, in which only the probability assigned to the true label is made available; and decision set (DS) disclosure, in which an adaptive prediction set is produced as in conformal prediction. We derive bounds on the advantage of an MIA adversary with the aim of offering insights into the impact of uncertainty and calibration on the effectiveness of MIAs. Simulation results demonstrate that the derived analytical bounds accurately predict the effectiveness of MIAs.
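For concreteness, LiRA can be summarized as a likelihood ratio test; the notation below is a standard formulation of the attack rather than the specific formalization adopted in the paper. Given a candidate example $(x, y)$ and the feedback $f(x, y)$ observed by the attacker (the confidence vector, the true label confidence, or the decision set, depending on the disclosure setting), the attack computes

$$\Lambda(x, y) = \frac{p\big(f(x, y) \,\big|\, (x, y) \in \mathcal{D}\big)}{p\big(f(x, y) \,\big|\, (x, y) \notin \mathcal{D}\big)},$$

where $\mathcal{D}$ denotes the training set of the target model, and declares membership when $\Lambda(x, y)$ exceeds a threshold $\tau$. The adversary's advantage at threshold $\tau$ is then commonly measured as $\mathrm{Adv}(\tau) = \mathrm{TPR}(\tau) - \mathrm{FPR}(\tau)$, the gap between the true and false positive rates of the membership decision.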