Composite optimization problems involve minimizing the composition of a smooth map with a convex function. Such objectives arise in numerous data science and signal processing applications, including phase retrieval, blind deconvolution, and collaborative filtering. The subgradient method achieves local linear convergence when the composite loss is well-conditioned. However, if the smooth map is, in a certain sense, ill-conditioned or overparameterized, the subgradient method exhibits much slower sublinear convergence even when the convex function is well-conditioned. To overcome this limitation, we introduce a Levenberg-Morrison-Marquardt subgradient method that converges linearly under mild regularity conditions at a rate determined solely by the convex function. Further, we demonstrate that these regularity conditions hold for several problems of practical interest, including square-variable formulations, matrix sensing, and tensor factorization. Numerical experiments illustrate the benefits of our method.
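The abstract does not spell out the update rule, but the idea it names can be sketched. Below is a minimal, hypothetical prototype of one iteration for minimizing f(x) = h(c(x)) with h convex and c smooth: the raw chain-rule subgradient is preconditioned by a damped Gauss-Newton matrix (the Levenberg-Morrison-Marquardt ingredient), and a Polyak-type step size is used, which assumes the minimal value is known. The function names, damping choice, and step-size rule are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def lmm_subgradient_step(x, c, jac_c, subgrad_h, h, damping=1e-3, f_min=0.0):
    """One illustrative step for f(x) = h(c(x)), h convex, c smooth.
    Hypothetical sketch: names and the step-size rule are assumptions."""
    J = jac_c(x)                      # Jacobian of the smooth map c at x
    v = subgrad_h(c(x))               # a subgradient of the convex h at c(x)
    g = J.T @ v                       # chain rule: a subgradient of f at x
    # Damped Gauss-Newton preconditioner (J^T J + lambda * I): this is the
    # Levenberg-Morrison-Marquardt ingredient meant to compensate for an
    # ill-conditioned or overparameterized smooth map c.
    d = np.linalg.solve(J.T @ J + damping * np.eye(x.size), g)
    # Polyak-type step size, assuming the minimal value f_min is known
    # (a common assumption in subgradient methods for sharp problems).
    gap = h(c(x)) - f_min
    t = gap / max(float(d @ g), 1e-12)
    return x - t * d
```

In this sketch the damping parameter interpolates between a plain subgradient step (large damping) and a fully Gauss-Newton-preconditioned step (small damping); the latter is what removes the dependence on the conditioning of c, so that only the conditioning of h enters the rate.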