This paper studies the parametric bootstrap method for networks to quantify the uncertainty of statistics of interest. While existing network resampling methods primarily focus on count statistics under node-exchangeable (graphon) models, we consider more general network statistics (including local statistics) under the Chung-Lu model, which is not node-exchangeable. We show that the natural network parametric bootstrap, which first estimates the network generating model and then draws bootstrap samples from the estimated model, generally suffers from bootstrap bias. As a general recipe for addressing this problem, we show that a two-level bootstrap procedure provably reduces the bias. This extends the classical idea of the iterative bootstrap to the network setting, where the number of parameters grows with the network size. Moreover, for many network statistics, the second-level bootstrap also provides a way to construct confidence intervals with higher accuracy. As a byproduct of our effort to construct confidence intervals, we also prove the asymptotic normality of subgraph counts under the Chung-Lu model.
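To make the procedure concrete, the following is a minimal sketch (not the paper's exact algorithm) of the one-level network parametric bootstrap under the Chung-Lu model: estimate the degree parameters, then repeatedly resample networks from the fitted model and recompute the statistic. The moment estimator `theta_i = d_i / sqrt(sum_j d_j)` and the edge-count statistic are illustrative assumptions; the paper's two-level refinement would repeat the fit-and-resample step on each first-level sample to estimate and remove the bootstrap bias.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_chung_lu(A):
    """Estimate Chung-Lu parameters theta from an adjacency matrix.

    Under the Chung-Lu model, P(A_ij = 1) = min(theta_i * theta_j, 1);
    a standard moment estimator (an assumption here) is
    theta_i = d_i / sqrt(sum_j d_j), with d_i the degree of node i.
    """
    d = A.sum(axis=1)
    return d / np.sqrt(d.sum())

def sample_chung_lu(theta, rng):
    """Draw one undirected, loop-free network from the Chung-Lu model."""
    n = len(theta)
    P = np.clip(np.outer(theta, theta), 0.0, 1.0)
    U = rng.random((n, n))
    A = (U < P).astype(int)
    A = np.triu(A, 1)   # keep upper triangle only: no self-loops
    return A + A.T      # symmetrize

def parametric_bootstrap(A, stat, B, rng):
    """One-level parametric bootstrap: fit once, then resample B times."""
    theta_hat = fit_chung_lu(A)
    return np.array([stat(sample_chung_lu(theta_hat, rng)) for _ in range(B)])

# Example: bootstrap distribution of the edge count on a synthetic network
A0 = sample_chung_lu(np.full(20, 0.3), rng)
edge_count = lambda A: A.sum() // 2
boot = parametric_bootstrap(A0, edge_count, B=200, rng=rng)
```

The spread of `boot` around `edge_count(A0)` gives a naive uncertainty estimate; the paper's point is that for many statistics this one-level distribution is systematically biased, which motivates the second bootstrap level.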