If the conclusion of a data analysis is sensitive to dropping very few data points, that conclusion might hinge on the particular data at hand rather than representing a more broadly applicable truth. How could we check whether this sensitivity holds? One idea is to consider every small subset of data, drop it from the dataset, and re-run our analysis. But running MCMC to approximate a Bayesian posterior is already very expensive; running it multiple times is prohibitive, and the number of re-runs needed here is combinatorially large. Recent work proposes a fast and accurate approximation to find the worst-case dropped data subset, but that work was developed for problems based on estimating equations and does not directly handle Bayesian posterior approximations using MCMC. We make two principal contributions in the present work. First, we adapt the existing data-dropping approximation to estimators computed via MCMC. Second, observing that Monte Carlo errors induce variability in the approximation, we use a variant of the bootstrap to quantify this uncertainty. We demonstrate how to use our approximation in practice to determine whether there is non-robustness in a problem. Empirically, our method is accurate in simple models, such as linear regression. In models with complicated structure, such as hierarchical models, the performance of our method is mixed.
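The brute-force idea described above (drop every small subset and re-run the analysis) can be sketched as follows. This is an illustrative baseline only, not the paper's fast approximation; the `worst_case_drop` helper and the toy mean estimator are assumptions introduced here for concreteness. The sketch also makes the combinatorial cost visible: the loop performs one re-run per size-k subset, i.e., C(n, k) re-runs.

```python
from itertools import combinations

def worst_case_drop(data, k, estimator):
    """Brute-force search over all size-k subsets to drop:
    re-run the estimator on each reduced dataset and return the
    subset whose removal changes the estimate the most.
    Cost: C(n, k) re-runs of the estimator, which is why this is
    infeasible when each re-run is an expensive MCMC posterior
    approximation."""
    full = estimator(data)
    best_subset, best_change = None, 0.0
    for drop in combinations(range(len(data)), k):
        dropped = set(drop)
        kept = [x for i, x in enumerate(data) if i not in dropped]
        change = abs(estimator(kept) - full)
        if change > best_change:
            best_subset, best_change = drop, change
    return best_subset, best_change

# Toy example: the sample mean is highly sensitive to one outlier,
# so the conclusion "the mean is large" is non-robust to dropping
# a single point.
data = [1.0, 1.2, 0.9, 1.1, 10.0]
mean = lambda xs: sum(xs) / len(xs)
subset, change = worst_case_drop(data, k=1, estimator=mean)
# The worst-case subset to drop is the outlier at index 4.
```

In the paper's setting the estimator would be a posterior functional approximated by MCMC rather than a closed-form mean, which is exactly why a cheap approximation to this search is needed.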