We consider the estimation of a parameter $\mathbf{x}$ lying in a cone from nonlinear observations of the form $\{y_i=f_i(\langle\mathbf{a}_i,\mathbf{x}\rangle)\}_{i=1}^m$. We develop a unified approach that first constructs a gradient from the data and then establishes the restricted approximate invertibility condition (RAIC), a condition that quantifies how well the gradient aligns with the ideal descent step. We show that the RAIC yields linear convergence guarantees for the standard projected gradient descent algorithm, a Riemannian gradient descent algorithm for low Tucker-rank tensor estimation, and a factorized gradient descent algorithm for asymmetric low-rank matrix estimation. Under Gaussian designs, we establish sharp RAIC bounds for the canonical statistical estimation problems of single index models, generalized linear models, noisy phase retrieval, and one-bit compressed sensing. Combining the convergence guarantees with the RAIC, we obtain a set of optimal statistical estimation results, including, to our knowledge, the first minimax-optimal and computationally efficient algorithms for tensor single index models, tensor logistic regression, (local) noisy tensor phase retrieval, and one-bit tensor sensing. Several of our other results are either new or match the best known guarantees. We also provide simulations and a real-data experiment to illustrate the theoretical results.
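To make the projected gradient descent framework in the abstract concrete, the following is a minimal sketch for the one-bit compressed sensing instance under a Gaussian design, in the style of binary iterative hard thresholding. The data-driven gradient, the unit step size, the iteration count, and the hard-thresholding projection onto the cone of $s$-sparse unit vectors are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def hard_threshold(v, s):
    """Project onto the cone of s-sparse vectors: keep the s largest-magnitude entries."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-s:]
    out[idx] = v[idx]
    return out

def pgd_one_bit(A, y, s, eta=1.0, iters=50):
    """Projected gradient descent for one-bit observations y_i = sign(<a_i, x>).

    A: (m, n) Gaussian design matrix; y: (m,) vector of signs; s: sparsity level.
    Returns a unit-norm s-sparse estimate of the direction of x.
    """
    m = A.shape[0]
    # Initialization from the data-constructed gradient: (1/m) A^T y is,
    # in expectation, proportional to the true direction under Gaussian designs.
    x = hard_threshold(A.T @ y / m, s)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        # Gradient constructed from the one-bit data (BIHT-style residual).
        g = A.T @ (np.sign(A @ x) - y) / m
        # Gradient step, then projection back onto the sparse cone and the sphere.
        x = hard_threshold(x - eta * g, s)
        x /= np.linalg.norm(x)
    return x
```

Since one-bit observations carry no magnitude information, only the direction of $\mathbf{x}$ is identifiable, so the iterate is renormalized to the unit sphere after each projection.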