We study the approximation of a square-integrable function from a finite number of evaluations at a random set of nodes drawn according to a well-chosen distribution. This is particularly relevant when the function is assumed to belong to a reproducing kernel Hilbert space (RKHS). This work proposes to combine several natural finite-dimensional approximations based on two possible probability distributions of nodes. These distributions are related to determinantal point processes, and use the kernel of the RKHS to favor RKHS-adapted regularity in the random design. While previous work on determinantal sampling relied on the RKHS norm, we prove mean-square guarantees in the $L^2$ norm. We show that determinantal point processes and mixtures thereof can yield fast convergence rates. Our results also shed light on how the rate changes as more smoothness is assumed, a phenomenon known as superconvergence. Moreover, determinantal sampling generalizes i.i.d. sampling from the Christoffel function, which is standard in the literature. More importantly, determinantal sampling guarantees the so-called instance optimality property for a smaller number of function evaluations than i.i.d. sampling.
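As a concrete illustration of the kind of sampling discussed above (a minimal sketch, not the paper's exact scheme), the following draws $n$ nodes from a projection determinantal point process built from an orthonormal feature matrix on a discretized interval, then recovers a function lying exactly in the $n$-dimensional approximation space from its values at those nodes. The polynomial feature space, the grid discretization, and the helper name `sample_projection_dpp` are illustrative assumptions; the sampler follows the standard sequential (HKPV-style) algorithm for projection DPPs.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 300, 6                      # grid size, dimension of the approximation space
x = np.linspace(0.0, 1.0, N)

# Orthonormal feature matrix on the grid (polynomial space, uniform reference
# measure); Q has shape (N, n) with orthonormal columns.
Q, _ = np.linalg.qr(np.vander(x, n, increasing=True))

def sample_projection_dpp(Q, rng):
    """Draw exactly n distinct node indices from the projection DPP with
    kernel K = Q Q^T, by sequential conditional sampling."""
    V = Q.copy()
    picked = []
    while V.shape[1] > 0:
        p = np.einsum("ij,ij->i", V, V)            # diag of current projection kernel
        p = np.clip(p, 0.0, None)
        i = int(rng.choice(len(p), p=p / p.sum()))
        picked.append(i)
        j = int(np.argmax(np.abs(V[i])))           # pivot column with V[i, j] != 0
        V = V - np.outer(V[:, j], V[i]) / V[i, j]  # zero out row i across all columns
        V = np.delete(V, j, axis=1)                # drop the (now zero) pivot column
        if V.shape[1] > 0:
            V, _ = np.linalg.qr(V)                 # re-orthonormalize the rest
    return picked

S = sample_projection_dpp(Q, rng)

# A target in the span of the features: with a projection DPP, the n x n
# submatrix Q[S] is invertible almost surely (its squared determinant is
# proportional to the probability of drawing S), so interpolation recovers f.
c_true = rng.standard_normal(n)
fvals = Q @ c_true
c_hat = np.linalg.solve(Q[S], fvals[S])
err = np.max(np.abs(Q @ c_hat - fvals))
```

The sampler returns exactly $n$ distinct nodes, and the reconstruction error `err` is at machine-precision level for functions inside the approximation space; the interesting regime studied in the text is of course functions outside that span, where the choice of node distribution governs the $L^2$ rate.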