We propose an extension of Thompson sampling to optimization problems over function spaces in which the objective is a known functional of an unknown operator's output. We assume that queries to the operator (such as running a high-fidelity simulator or a physical experiment) are costly, while evaluating the functional on the operator's output is inexpensive. Our algorithm employs a sample-then-optimize approach using neural operator surrogates. This strategy avoids explicit uncertainty quantification by treating trained neural operators as approximate samples from a Gaussian process (GP) posterior. We derive regret bounds and theoretical results connecting neural operators with GPs in infinite-dimensional settings. Experiments benchmark our method against Bayesian optimization baselines on functional optimization tasks involving partial differential equations of physical systems, demonstrating improved sample efficiency and significant performance gains.
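The sample-then-optimize loop described above can be illustrated with a minimal sketch. Everything below is hypothetical scaffolding, not the paper's implementation: the costly operator is mocked by a closed-form map, the neural operator surrogate is replaced by a tiny random-feature regressor (whose fresh random initialization at each round stands in for drawing one approximate posterior sample), and the inner optimization is a grid search over candidate queries.

```python
import numpy as np

rng = np.random.default_rng(0)
xs = np.linspace(0.0, 1.0, 50)  # spatial grid for the operator's output function

def operator(a):
    # Costly black-box query (e.g. a PDE solve for parameter a) -- mocked here.
    return np.sin(a * xs) + 0.1 * np.cos(3.0 * xs)

def functional(u):
    # Known, cheap functional of the operator's output (here: mean squared value).
    return float(np.mean(u ** 2))

def fit_posterior_sample(A, U, n_feat=64):
    # Random-feature least-squares fit of a -> u(x); the fresh random features
    # play the role of one approximate GP posterior sample (sample-then-optimize).
    W = rng.normal(size=(1, n_feat))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_feat)
    Phi = np.cos(np.asarray(A)[:, None] * W + b)          # (n_obs, n_feat)
    coef, *_ = np.linalg.lstsq(Phi, np.stack(U), rcond=None)
    return lambda a: (np.cos(np.array([[a]]) * W + b) @ coef)[0]  # predicted u(x)

# Thompson-sampling loop over a candidate pool of query parameters.
candidates = np.linspace(0.5, 6.0, 200)
A = [1.0, 5.0]                        # initial designs
U = [operator(a) for a in A]          # their costly observations
for _ in range(5):
    sample = fit_posterior_sample(A, U)
    scores = [functional(sample(a)) for a in candidates]  # cheap inner loop
    a_next = candidates[int(np.argmax(scores))]           # optimize the sample
    A.append(a_next)
    U.append(operator(a_next))                            # one costly query

best = max(functional(u) for u in U)
```

The design point being illustrated is that each round spends exactly one costly operator query, while the surrogate's output is evaluated cheaply at every candidate; uncertainty enters only through the randomness of the freshly trained surrogate, with no explicit posterior covariance maintained.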