This paper studies the parametric bootstrap method for networks to quantify the uncertainty of statistics of interest. While existing network resampling methods primarily focus on count statistics under node-exchangeable (graphon) models, we consider more general network statistics (including local statistics) under the Chung-Lu model, without assuming node exchangeability. We show that the natural network parametric bootstrap, which first estimates the network-generating model and then draws bootstrap samples from the estimated model, generally suffers from bootstrap bias. As a general recipe for addressing this problem, we show that a two-level bootstrap procedure provably reduces the bias. This essentially extends the classical idea of the iterated bootstrap to the network setting with a growing number of parameters. Moreover, the second-level bootstrap provides a way to construct higher-accuracy confidence intervals for many network statistics.
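To make the procedure described above concrete, the following is a minimal sketch of a two-level network parametric bootstrap under the Chung-Lu model. All function names are illustrative, not from the paper; the degree-based moment estimator, the double-bootstrap bias correction `3*t - 3*mean(first) + mean(second)` (the classical Hall-style iterated-bootstrap formula, used here as an illustrative assumption about the correction), and the choice of statistic are assumptions for this sketch.

```python
import numpy as np

def fit_chung_lu(A):
    """Moment estimate of degree parameters: theta_i = d_i / sqrt(sum_j d_j)."""
    d = A.sum(axis=1)
    return d / np.sqrt(d.sum())

def sample_chung_lu(theta, rng):
    """Draw one network: edge (i, j) present independently w.p. min(theta_i * theta_j, 1)."""
    n = len(theta)
    P = np.clip(np.outer(theta, theta), 0.0, 1.0)
    A = (rng.random((n, n)) < P).astype(int)
    A = np.triu(A, 1)           # keep upper triangle, no self-loops
    return A + A.T              # symmetrize

def two_level_bootstrap(A, stat, B1=200, B2=2, rng=None):
    """Two-level parametric bootstrap: first level estimates the sampling
    distribution of stat; second level estimates (and removes) the plug-in bias."""
    if rng is None:
        rng = np.random.default_rng(0)
    theta = fit_chung_lu(A)
    first, second = [], []
    for _ in range(B1):
        A1 = sample_chung_lu(theta, rng)        # first-level sample
        first.append(stat(A1))
        theta1 = fit_chung_lu(A1)               # re-estimate from the bootstrap sample
        for _ in range(B2):
            second.append(stat(sample_chung_lu(theta1, rng)))  # second level
    t_hat = stat(A)
    # Double-bootstrap bias-corrected estimate (illustrative correction formula)
    corrected = 3 * t_hat - 3 * np.mean(first) + np.mean(second)
    return corrected, np.std(first)             # point estimate and bootstrap SE
```

For example, one could take `stat` to be the edge density and read off a bias-corrected estimate together with a first-level bootstrap standard error; the same skeleton applies to local statistics by changing `stat`.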