We develop a multilevel Monte Carlo (MLMC) framework for uncertainty quantification with Monte Carlo dropout. Treating dropout masks as the source of epistemic randomness, we define a fidelity hierarchy by the number of stochastic forward passes used to estimate predictive moments. We construct coupled coarse--fine estimators by reusing dropout masks across fidelities, yielding telescoping MLMC estimators for both predictive means and predictive variances that remain unbiased for the corresponding dropout-induced quantities while reducing sampling variance at a fixed evaluation budget. We derive explicit bias, variance, and effective-cost expressions, together with sample-allocation rules across levels. Numerical experiments on forward and inverse PINNs--Uzawa benchmarks confirm the predicted variance rates and demonstrate efficiency gains over single-level MC-dropout at matched cost.
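The coupling and telescoping ideas above can be sketched in a few lines. The snippet below is a minimal toy illustration, not the paper's implementation: the trained network is replaced by a hypothetical linear map with Bernoulli dropout, level $\ell$ uses $M_\ell = M_0 2^\ell$ forward passes, and the coarse estimate at each level reuses the first half of the fine level's masks so the coarse--fine difference has low variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained network: a fixed linear map whose
# units are zeroed by a Bernoulli dropout mask (inverted-dropout scaling).
W = rng.normal(size=16)
x = rng.normal(size=16)
p_keep = 0.9

def forward(mask):
    # One stochastic forward pass under a given dropout mask.
    return (W * mask) @ x / p_keep

def sample_masks(n):
    return rng.random((n, 16)) < p_keep

def level_difference(level, m0=4):
    """One sample of the telescoping term P_l - P_{l-1} for the
    predictive mean. The coarse estimate reuses the first half of the
    fine-level masks, so fine and coarse are positively correlated and
    their difference has much smaller variance than independent draws."""
    m_fine = m0 * 2**level
    preds = np.array([forward(m) for m in sample_masks(m_fine)])
    fine = preds.mean()
    if level == 0:
        return fine
    coarse = preds[: m_fine // 2].mean()  # mask reuse -> coupling
    return fine - coarse

def mlmc_mean(n_per_level=(400, 200, 100, 50, 25)):
    # Telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], with
    # many cheap samples at coarse levels and few expensive fine ones.
    return sum(
        np.mean([level_difference(level) for _ in range(n)])
        for level, n in enumerate(n_per_level)
    )

est = mlmc_mean()
```

For the predictive mean every level is already unbiased, so the correction terms average to zero; the MLMC gains highlighted in the abstract are most pronounced for nonlinear functionals such as the predictive variance, where coarse-level estimates carry a genuine bias that the telescoping sum corrects.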