Robust subspace estimation is fundamental to many machine learning and data analysis tasks. Iteratively Reweighted Least Squares (IRLS) is an elegant and empirically effective approach to this problem, yet its theoretical properties remain poorly understood. This paper establishes that, under deterministic conditions, a variant of IRLS with dynamic smoothing regularization converges linearly to the underlying subspace from any initialization. We extend these guarantees to affine subspace estimation, a setting that lacks prior recovery theory. Additionally, we illustrate the practical benefits of IRLS through an application to low-dimensional neural network training. Our results provide the first global convergence guarantees for IRLS in robust subspace recovery and, more broadly, for nonconvex IRLS on a Riemannian manifold.
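To make the scheme concrete, here is a minimal sketch of the generic IRLS iteration for robust subspace recovery that the abstract describes: alternate between smoothed inverse-distance reweighting and a weighted PCA step, while shrinking the smoothing parameter. The geometric schedule `delta_k = beta**k * delta0` and all parameter names are illustrative assumptions, not the paper's exact dynamic-smoothing rule.

```python
import numpy as np

def irls_subspace(X, d, n_iter=100, delta0=1.0, beta=0.9):
    """IRLS sketch for robust subspace recovery.

    X: (n, D) data matrix; d: target subspace dimension.
    delta0, beta: hypothetical smoothing schedule delta_k = beta**k * delta0
    (the paper's actual dynamic regularization may differ).
    Returns an orthonormal basis U of shape (D, d).
    """
    n, D = X.shape
    # Initialize with ordinary PCA; per the global guarantee,
    # any initialization should do.
    U = np.linalg.svd(X.T @ X)[0][:, :d]
    delta = delta0
    for _ in range(n_iter):
        # Distance of each point to the current subspace.
        resid = X - (X @ U) @ U.T
        dist = np.linalg.norm(resid, axis=1)
        # Smoothed inverse-distance weights; delta caps the weight
        # of points already near the subspace.
        w = 1.0 / np.maximum(dist, delta)
        # Weighted PCA step: top-d eigenvectors of the weighted covariance.
        C = (X * w[:, None]).T @ X
        U = np.linalg.svd(C)[0][:, :d]
        delta *= beta  # shrink the smoothing parameter dynamically
    return U

# Toy usage: inliers on a 2-D subspace of R^10 plus gross outliers.
rng = np.random.default_rng(0)
B = np.linalg.qr(rng.normal(size=(10, 2)))[0]
inliers = rng.normal(size=(80, 2)) @ B.T
outliers = rng.normal(scale=5.0, size=(20, 10))
U_hat = irls_subspace(np.vstack([inliers, outliers]), d=2)
```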