Optimization constrained by high-fidelity computational models has the potential for transformative impact. However, such optimization is frequently unattainable in practice due to the complexity and computational intensity of the model. An alternative is to optimize a low-fidelity model and use limited evaluations of the high-fidelity model to assess the quality of the solution. This article develops a framework that uses limited high-fidelity simulations to update the optimization solution computed with the low-fidelity model. Building on a previous article [22], which introduced hyper-differential sensitivity analysis with respect to model discrepancy, this article provides novel extensions of the algorithm to enable uncertainty quantification of the optimal-solution update via a Bayesian framework. Specifically, we formulate a Bayesian inverse problem to estimate the model discrepancy and propagate the posterior discrepancy distribution through the post-optimality sensitivity operator of the low-fidelity optimization problem. We provide a rigorous treatment of the Bayesian formulation, a computationally efficient algorithm to compute posterior samples, a guide to specifying and interpreting the algorithm hyperparameters, and a demonstration of the approach on three examples that highlight various types of discrepancy between low- and high-fidelity models.
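The propagation step described above can be sketched in a few lines. This is a minimal illustrative example, not the article's implementation: the dimensions, the low-fidelity solution `z_lf`, the sensitivity operator `D`, and the Gaussian posterior for the discrepancy parameters are all hypothetical stand-ins. It assumes the optimal-solution update is linear in the discrepancy parameters, so posterior samples of the discrepancy can be pushed through the post-optimality sensitivity operator to obtain samples of the updated optimal solution.

```python
# Sketch: propagate posterior model-discrepancy samples through a
# linearized post-optimality sensitivity operator to quantify
# uncertainty in the optimal-solution update. All quantities below
# are hypothetical placeholders for illustration only.
import numpy as np

rng = np.random.default_rng(0)

n_z, n_delta = 3, 5                       # optimization vars, discrepancy params
z_lf = np.array([1.0, -0.5, 2.0])         # low-fidelity optimal solution (assumed)

# Post-optimality sensitivity operator dz*/d(delta); in the article this
# comes from the low-fidelity optimality system (here: a random stand-in).
D = rng.normal(size=(n_z, n_delta))

# Gaussian posterior for the discrepancy parameters, assumed calibrated
# from limited high-fidelity evaluations via a Bayesian inverse problem.
delta_mean = 0.1 * np.ones(n_delta)
delta_cov = 0.01 * np.eye(n_delta)

# Draw posterior samples and push them through the linear sensitivity map;
# each row of z_updates is one posterior sample of the updated solution.
samples = rng.multivariate_normal(delta_mean, delta_cov, size=1000)
z_updates = z_lf + samples @ D.T

# Summarize the posterior distribution of the updated optimal solution.
z_post_mean = z_updates.mean(axis=0)
z_post_std = z_updates.std(axis=0)
```

Because the map is linear, `z_post_mean` approximates `z_lf + D @ delta_mean` and the sample covariance approximates `D @ delta_cov @ D.T`; in practice a sample-based approach like this avoids forming the sensitivity operator explicitly when it is only available through matrix-vector products.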