We consider the problem of approximating an unknown function in a nonlinear model class from point evaluations. When obtaining these point evaluations is costly, minimising the required sample size becomes crucial. Recently, there has been an increasing focus on employing adaptive sampling strategies to achieve this. These strategies are based on linear spaces related to the nonlinear model class, for which the optimal sampling measures are known. However, the resulting optimal sampling measures depend on an orthonormal basis of the linear space, which is rarely known. Consequently, sampling from these measures is challenging in practice. This manuscript presents a sampling strategy that iteratively refines an estimate of the optimal sampling measure by updating it based on previously drawn samples. This strategy can be performed offline and does not require evaluations of the sought function. We establish convergence and illustrate the practical performance through numerical experiments. Comparing the presented approach with standard Monte Carlo sampling demonstrates a significant reduction in the number of samples required to achieve a good estimation of an orthonormal basis.
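For context, the known optimal sampling measure for weighted least-squares approximation in an m-dimensional linear space with orthonormal basis (φ_1, …, φ_m) has density k_m(x)/m with respect to the reference measure, where k_m(x) = Σ_j φ_j(x)² is the inverse Christoffel function. The following sketch illustrates this known-basis baseline only, not the iterative strategy of the manuscript; it assumes a normalised Legendre basis under the uniform measure on [-1, 1], and the function names are ours.

```python
import numpy as np
from numpy.polynomial import legendre

def inv_christoffel(x, m):
    """k_m(x) = sum of squared orthonormal Legendre polynomials.

    sqrt(2j+1) * P_j is orthonormal w.r.t. the uniform measure dx/2 on [-1, 1].
    """
    k = np.zeros_like(x, dtype=float)
    for j in range(m):
        coeffs = np.zeros(j + 1)
        coeffs[j] = 1.0
        k += (2 * j + 1) * legendre.legval(x, coeffs) ** 2
    return k

def sample_optimal(m, n, rng):
    """Draw n samples from the density k_m/m by rejection sampling.

    Uses the bound k_m(x) <= m^2 on [-1, 1] (attained at x = +-1),
    so k_m/m <= m serves as the rejection envelope.
    """
    samples = []
    while len(samples) < n:
        x = rng.uniform(-1.0, 1.0)          # proposal: uniform on [-1, 1]
        u = rng.uniform(0.0, m)             # envelope height m
        if u <= inv_christoffel(np.array([x]), m)[0] / m:
            samples.append(x)
    return np.array(samples)

rng = np.random.default_rng(0)
xs = sample_optimal(m=5, n=200, rng=rng)

# Sanity check: the density k_m/m integrates to 1 against the uniform measure,
# so its average over a fine uniform grid should be close to 1.
grid = np.linspace(-1.0, 1.0, 2001)
approx_mass = np.mean(inv_christoffel(grid, 5) / 5)
```

The rejection step is the practical bottleneck this manuscript targets: without a known orthonormal basis, k_m cannot be evaluated, which motivates iteratively refining an estimate of the measure instead.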