Regressing a function $F$ on $\mathbb{R}^d$ without the statistical and computational curse of dimensionality requires special statistical models, for example ones that impose geometric assumptions on the distribution of the data (e.g., that its support is low-dimensional), strong smoothness assumptions on $F$, or a special structure of $F$. Among the latter, compositional models $F=f\circ g$, with $g$ mapping to $\mathbb{R}^r$ and $r\ll d$, include classical single- and multi-index models, as well as neural networks. While the case where $g$ is linear is well understood, less is known when $g$ is nonlinear, and in particular for which $g$'s the curse of dimensionality in estimating $F$, or both $f$ and $g$, may be circumvented. Here we consider a model $F(X):=f(\Pi_\gamma(X))$, where $\Pi_\gamma:\mathbb{R}^d\to[0,\textrm{len}_\gamma]$ is the closest-point projection onto the parameter of a regular curve $\gamma:[0, \textrm{len}_\gamma]\to\mathbb{R}^d$, and $f:[0,\textrm{len}_\gamma]\to \mathbb{R}$. The input data $X$ are not low-dimensional: they can be as far from $\gamma$ as the condition that $\Pi_\gamma(X)$ is well-defined allows. The distribution of $X$, the curve $\gamma$, and the function $f$ are all unknown. This model is a natural nonlinear generalization of the single-index model, which corresponds to $\gamma$ being a line. We propose a nonparametric estimator, based on conditional regression, that, under suitable assumptions, the strongest of which is that $f$ is coarsely monotone, achieves, up to log factors, the $\textit{one-dimensional}$ optimal minimax rate for nonparametric regression, up to the level of noise in the observations, and can be constructed in time $\mathcal{O}(d^2 n\log n)$. All the constants in the learning bounds, in the minimal number of samples required for our bounds to hold, and in the computational complexity are at most low-order polynomials in $d$.
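To make the model concrete, the following is a minimal Python sketch, not the paper's estimator: it assumes $\gamma$ is known (here a helix embedded in $\mathbb{R}^d$, a choice made purely for illustration), computes the closest-point projection $\Pi_\gamma$ numerically on a dense parameter grid, and fits $f$ by a simple one-dimensional binning (local-averaging) regression of the responses on the projected parameter. All variable names and the particular link function are hypothetical.

```python
# Illustrative simulation of the model F(X) = f(Pi_gamma(X)).
# NOT the paper's algorithm: gamma is taken as known, and the 1D fit
# is a naive binning rule, both simplifications for illustration.
import numpy as np

rng = np.random.default_rng(0)
d, n = 10, 2000

# A regular curve: a helix supported on the first 3 of d coordinates.
t_grid = np.linspace(0.0, 4 * np.pi, 4000)  # dense parameter grid

def gamma(t):
    g = np.zeros(np.shape(t) + (d,))
    g[..., 0] = np.cos(t)
    g[..., 1] = np.sin(t)
    g[..., 2] = 0.5 * t
    return g

curve = gamma(t_grid)  # (4000, d)

def f(t):
    """A coarsely monotone link function on the curve parameter."""
    return t + 0.5 * np.sin(t)

# Sample X near (but not on) the curve: ambient noise small enough
# that the closest-point projection remains well-defined.
t_true = rng.uniform(0.0, 4 * np.pi, size=n)
X = gamma(t_true) + 0.1 * rng.standard_normal((n, d))
Y = f(t_true) + 0.05 * rng.standard_normal(n)  # noisy responses

# Closest-point projection Pi_gamma: nearest point on the dense grid,
# via the expansion |x - c|^2 = |x|^2 - 2 x.c + |c|^2.
sq = (X**2).sum(1)[:, None] - 2.0 * X @ curve.T + (curve**2).sum(1)[None, :]
t_hat = t_grid[sq.argmin(axis=1)]

# 1D local-averaging regression of Y on the projected parameter:
# average the responses falling in each bin of width h.
h = 0.2
bins = np.floor(t_hat / h).astype(int)
f_hat = {int(b): Y[bins == b].mean() for b in np.unique(bins)}

def F_hat(x):
    """Predict at a new point x in R^d by project-then-lookup."""
    t = t_grid[((x - curve) ** 2).sum(axis=1).argmin()]
    return f_hat.get(int(np.floor(t / h)), np.nan)

# Sanity check: mean squared error on fresh data.
t_new = rng.uniform(0.0, 4 * np.pi, size=200)
X_new = gamma(t_new) + 0.1 * rng.standard_normal((200, d))
errs = [(F_hat(x) - f(t)) ** 2 for x, t in zip(X_new, t_new)]
print("test MSE:", float(np.nanmean(errs)))
```

The brute-force projection above costs $\mathcal{O}(ndm)$ for a grid of size $m$, far from the paper's $\mathcal{O}(d^2 n\log n)$ bound for the full estimator (with $\gamma$ and $f$ both unknown); the sketch only illustrates the project-then-regress structure of the model.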