In this paper, we consider a class of nonconvex, nonsmooth sparse optimization problems that encompasses most existing nonconvex sparsity-inducing terms. We show that the second-order optimality conditions depend only on the nonzero entries of a stationary point. We propose two damped iterative reweighted algorithms, the damped iteratively reweighted $\ell_1$ algorithm (DIRL$_1$) and the damped iteratively reweighted $\ell_2$ algorithm (DIRL$_2$), to solve these problems. For DIRL$_1$, we show that the reweighted $\ell_1$ subproblem has a support identification property, so that DIRL$_1$ locally reduces to a gradient descent method around a stationary point. For DIRL$_2$, we show that the solution map of the reweighted $\ell_2$ subproblem is differentiable and Lipschitz continuous everywhere. Consequently, the iteration maps of DIRL$_1$ and DIRL$_2$ and their inverses are Lipschitz continuous, and strict saddle points are unstable fixed points of both algorithms. By applying the stable manifold theorem, we show that, under the strict saddle point property, these algorithms converge only to local minimizers with random initialization.
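To make the damped reweighted $\ell_1$ scheme concrete, the following is a minimal illustrative sketch, not the paper's exact method: it applies a DIRL$_1$-style iteration to a least-squares problem with the standard nonconvex log-sum sparsity penalty $\lambda \sum_i \log(1 + |x_i|/\epsilon)$. All names (`A`, `b`, `lam`, `eps`, the proximal parameter `beta`, and the damping factor `theta`) are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding: prox of a weighted l1 term."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def dirl1(A, b, lam=0.1, eps=1e-2, beta=None, theta=0.5, iters=200):
    """Sketch of a damped iteratively reweighted l1 iteration for
        min_x 0.5*||A x - b||^2 + lam * sum_i log(1 + |x_i|/eps).
    The concave penalty is majorized by a weighted l1 term at each step."""
    m, n = A.shape
    if beta is None:
        # any beta at least the Lipschitz constant of the smooth gradient works
        beta = np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    w = np.full(n, lam / eps)  # initial reweighted-l1 weights at x = 0
    for _ in range(iters):
        # damping: convex combination of the old and freshly computed weights
        w_new = lam / (eps + np.abs(x))
        w = (1.0 - theta) * w + theta * w_new
        # reweighted-l1 subproblem solved by a proximal gradient step
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / beta, w / beta)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100)
    x_true[:5] = 1.0
    b = A @ x_true
    x = dirl1(A, b)
    print("nonzeros:", np.count_nonzero(np.abs(x) > 1e-6))
```

The damping step `w = (1 - theta) * w + theta * w_new` is what distinguishes a damped scheme from the plain reweighting `w = w_new`; with `theta = 1` the sketch recovers the undamped iteration.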