We consider the problem of propagating the uncertainty from a possibly large number of random inputs through a computationally expensive model. Stratified sampling is a well-known variance reduction strategy, but its application, thus far, has focused on models with a limited number of inputs due to the challenges of creating uniform partitions in high dimensions. To overcome these challenges, we propose a simple methodology for constructing an effective stratification of the input domain that is adapted to the model response. Our approach leverages neural active manifolds, a recently introduced nonlinear dimensionality reduction technique based on neural networks that identifies a one-dimensional manifold capturing most of the model variability. The resulting one-dimensional latent space is mapped to the unit interval, where stratification is performed with respect to the uniform distribution. The corresponding strata in the original input space are then recovered through the neural active manifold, generating partitions that tend to follow the level sets of the model. We show that our approach is effective in high dimensions and can be used to further reduce the variance of multifidelity Monte Carlo estimators.
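The procedure described above can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: the model `f` and the reduction `g` are hypothetical stand-ins (a linear projection replaces the learned neural active manifold), and the mapping to the unit interval is done with the empirical CDF of the latent values. Inputs in each equal-probability stratum are obtained by filtering a candidate pool, and the stratified estimator averages the per-stratum means with equal weights 1/K.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, n_per = 10, 8, 50  # input dimension, number of strata, samples per stratum

# Toy stand-in for an expensive model: its variability is concentrated
# along a single direction w of the d-dimensional input space.
w = rng.normal(size=d)
w /= np.linalg.norm(w)
def f(X):
    return np.sin(3.0 * X @ w) + 0.05 * X[:, 0]

# Hypothetical 1-D reduction standing in for the neural active manifold;
# here a plain linear projection, whereas the method learns it with a network.
def g(X):
    return X @ w

# Draw a candidate pool and push the latent values through their empirical
# CDF, which maps the 1-D latent space onto the unit interval.
pool = rng.standard_normal((200_000, d))
u = np.argsort(np.argsort(g(pool))) / len(pool)

# Equal-probability strata on [0, 1): each carries weight 1/K in the estimator.
est = 0.0
for k in range(K):
    stratum = pool[(u >= k / K) & (u < (k + 1) / K)]
    est += f(stratum[:n_per]).mean() / K

# Plain Monte Carlo with the same total budget, for comparison.
plain = f(rng.standard_normal((K * n_per, d))).mean()
print(f"stratified: {est:+.4f}  plain MC: {plain:+.4f}")
```

Because the strata follow the level sets of `g` (and hence, in this toy case, of `f`), the within-stratum variance of `f` is small and the stratified estimate fluctuates much less than plain Monte Carlo at the same budget.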