Membership inference attacks (MIAs) are becoming standard tools for auditing the privacy of machine learning models. The leading attacks -- LiRA (Carlini et al., 2022) and RMIA (Zarifzadeh et al., 2024) -- appear to use distinct scoring strategies, while the recently proposed BASE (Lassila et al., 2025) was shown to be equivalent to RMIA, making it difficult for practitioners to choose among them. We show that all three are instances of a single exponential-family log-likelihood ratio framework, differing only in their distributional assumptions and the number of parameters estimated per data point. This unification reveals a hierarchy (BASE1-4) that connects RMIA and LiRA as endpoints of a spectrum of increasing model complexity. Within this framework, we identify variance estimation as the key bottleneck at small shadow-model budgets and propose BaVarIA, a Bayesian variance inference attack that replaces threshold-based parameter switching with conjugate normal-inverse-gamma priors. BaVarIA yields a Student-t predictive (BaVarIA-t) or a Gaussian with stabilized variance (BaVarIA-n), providing stable performance without additional hyperparameter tuning. Across 12 datasets and 7 shadow-model budgets, BaVarIA matches or improves upon LiRA and RMIA, with the largest gains in the practically important low-shadow-model and offline regimes.
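To make the variance-estimation point concrete, here is a minimal sketch (not the authors' code) contrasting a plug-in Gaussian log-likelihood-ratio score (LiRA-style) with a Student-t predictive score obtained from a conjugate normal-inverse-gamma (NIG) prior, in the spirit of BaVarIA-t as described above. The conjugate updates are standard Bayesian results for a normal likelihood with unknown mean and variance; the prior hyperparameters (mu0, kappa0, alpha0, beta0), function names, and the way the two predictives are combined into a score are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch, assuming per-example "in" and "out" shadow-model
# statistics (e.g. rescaled losses) are available as 1-D arrays.
import numpy as np
from scipy.stats import norm, t


def gaussian_llr_score(x, in_obs, out_obs, eps=1e-8):
    """Plug-in Gaussian LLR: log N(x; mu_in, s_in) - log N(x; mu_out, s_out).

    With few shadow models, the sample standard deviations are noisy,
    which is the small-budget bottleneck the abstract identifies.
    """
    s_in = max(np.std(in_obs, ddof=1), eps)
    s_out = max(np.std(out_obs, ddof=1), eps)
    return (norm.logpdf(x, np.mean(in_obs), s_in)
            - norm.logpdf(x, np.mean(out_obs), s_out))


def nig_predictive_logpdf(x, obs, mu0=0.0, kappa0=1.0, alpha0=1.0, beta0=1.0):
    """Student-t posterior-predictive log-density under a NIG prior.

    Standard conjugate updates for a normal likelihood:
      kappa_n = kappa0 + n,  mu_n = (kappa0*mu0 + n*xbar)/kappa_n
      alpha_n = alpha0 + n/2
      beta_n  = beta0 + 0.5*sum((x_i - xbar)^2)
                + kappa0*n*(xbar - mu0)^2 / (2*kappa_n)
    Predictive: Student-t with df = 2*alpha_n, loc = mu_n,
    scale^2 = beta_n*(kappa_n + 1)/(alpha_n*kappa_n).
    Hyperparameter values here are placeholders, not tuned settings.
    """
    obs = np.asarray(obs, dtype=float)
    n, xbar = len(obs), obs.mean()
    kappa_n = kappa0 + n
    mu_n = (kappa0 * mu0 + n * xbar) / kappa_n
    alpha_n = alpha0 + n / 2.0
    beta_n = (beta0 + 0.5 * np.sum((obs - xbar) ** 2)
              + kappa0 * n * (xbar - mu0) ** 2 / (2.0 * kappa_n))
    scale = np.sqrt(beta_n * (kappa_n + 1.0) / (alpha_n * kappa_n))
    return t.logpdf(x, df=2.0 * alpha_n, loc=mu_n, scale=scale)


def student_t_llr_score(x, in_obs, out_obs, **prior):
    """BaVarIA-t-flavored score: LLR of the two Student-t predictives."""
    return (nig_predictive_logpdf(x, in_obs, **prior)
            - nig_predictive_logpdf(x, out_obs, **prior))


# Toy usage: with only two shadow models per side, the plug-in variance is
# unreliable, while the NIG predictive stays well-defined and heavier-tailed.
rng = np.random.default_rng(0)
in_obs = rng.normal(2.0, 1.0, size=2)
out_obs = rng.normal(0.0, 1.0, size=2)
print(gaussian_llr_score(1.5, in_obs, out_obs))
print(student_t_llr_score(1.5, in_obs, out_obs))
```

The design point the sketch is meant to surface: the Gaussian score trusts a sample variance computed from a handful of shadow models, whereas the conjugate update regularizes the variance through beta0 and kappa0, so no threshold-based switching or extra hyperparameter tuning is needed at small budgets.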