This paper analyzes the approximate control variate (ACV) approach to multifidelity uncertainty quantification in the case where weighted estimators are combined to form the components of the ACV. The weighted estimators enable precise grouping of models that share input samples to achieve improved variance reduction. We demonstrate that this viewpoint yields a generalized linear estimator that can assign any weight to any sample. This generalization shows that other linear estimators in the literature, particularly the multilevel best linear unbiased estimator (ML-BLUE) of Schaden and Ullmann (2020), are specific versions of the ACV estimator of Gorodetsky, Geraci, Jakeman, and Eldred (2020). Moreover, this connection enables numerous extensions and insights. For example, we empirically show that allowing non-independent groups can yield better variance reduction than the independent groups used by ML-BLUE. We also show that such grouped estimators can use arbitrary weighted estimators, not just the simple Monte Carlo estimators used in ML-BLUE. Finally, the analysis enables the derivation of ML-BLUE directly from a variance reduction perspective rather than a regression perspective.
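To make the variance-reduction mechanism behind the abstract concrete, the following is a minimal sketch of a classical (exact) control variate estimator with a single cheap low-fidelity model. It is a generic illustration only, not the paper's ACV or ML-BLUE construction: the model functions `f` and `g` and the known low-fidelity mean are hypothetical choices made for this example.

```python
import numpy as np

# Generic control-variate illustration (NOT the paper's ACV method).
# f is a hypothetical "high-fidelity" model, g a cheap correlated
# "low-fidelity" model whose mean is known in closed form.
rng = np.random.default_rng(0)

def f(x):
    return np.exp(x)        # E[f(X)] = e - 1 for X ~ Uniform(0, 1)

def g(x):
    return 1.0 + x          # E[g(X)] = 3/2, known exactly

mu_g = 1.5

x = rng.uniform(0.0, 1.0, 10_000)
fx, gx = f(x), g(x)

# The weight alpha = Cov(f, g) / Var(g) minimizes the variance of
# mu_cv = mean(f) - alpha * (mean(g) - mu_g); here it is estimated
# from the same samples.
alpha = np.cov(fx, gx)[0, 1] / np.var(gx, ddof=1)

mu_mc = fx.mean()                       # plain Monte Carlo estimate
mu_cv = mu_mc - alpha * (gx.mean() - mu_g)  # control-variate estimate
```

Because `f` and `g` are strongly correlated, the per-sample variance of `fx - alpha * gx` is much smaller than that of `fx` alone, which is the effect that ACV-type estimators exploit when the low-fidelity mean must itself be estimated rather than known exactly.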