This work proposes a novel adaptive linearized alternating direction method of multipliers (LADMM) for convex optimization, which improves the convergence rate of LADMM-type algorithms by adjusting the step size iteratively. The novelty of the method lies in using information from the current iterate to adaptively select suitable parameters, which enlarges the admissible step-size range for the subproblems and accelerates the algorithm while preserving convergence. Its key advantage is that it speeds up convergence as much as possible without sacrificing convergence guarantees. This matters in practice because the traditional linearized ADMM faces a trade-off in choosing the regularization (linearization) coefficient: a larger coefficient guarantees convergence but typically forces small step sizes, whereas a smaller coefficient permits larger iterative steps but may cause the algorithm to diverge. Adaptively selecting this parameter handles the balance more effectively and thus improves the efficiency of the algorithm.
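To make the trade-off concrete, the following is a minimal sketch of a classical (non-adaptive) linearized ADMM on a small l1-regularized problem. All names here are illustrative, not the paper's algorithm: the linearization coefficient `mu` plays the role of the regular-term coefficient discussed above. Convergence of this scheme is classically guaranteed when `mu >= beta * ||A||^2`, and since the effective step size of the x-update scales like `1/mu`, a conservative (large) `mu` means small steps; the adaptive method described in the abstract would instead adjust such parameters per iteration using the current iterate.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (componentwise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linearized_admm(A, b, lam=0.1, beta=1.0, iters=500):
    """Illustrative linearized ADMM for
        min_x 0.5*||x - b||^2 + lam*||A x||_1,
    via the split
        min 0.5*||x - b||^2 + lam*||z||_1  s.t.  A x - z = 0.
    The quadratic augmented term in the x-subproblem is linearized at
    the current iterate and regularized by (mu/2)*||x - x^k||^2, giving
    a closed-form x-update.  (Hypothetical sketch, not the paper's
    adaptive scheme: mu is fixed at the classical safe value.)"""
    m, n = A.shape
    # Classical safe choice: mu slightly above beta * ||A||_2^2.
    # A smaller mu would allow larger steps but can break convergence.
    mu = 1.01 * beta * np.linalg.norm(A, 2) ** 2
    x, z, u = np.zeros(n), np.zeros(m), np.zeros(m)  # u: scaled dual variable
    for _ in range(iters):
        grad = beta * A.T @ (A @ x - z + u)        # gradient of augmented term at x^k
        x = (b + mu * x - grad) / (1.0 + mu)       # linearized x-update (step ~ 1/mu)
        z = soft_threshold(A @ x + u, lam / beta)  # exact z-update (l1 prox)
        u = u + A @ x - z                          # scaled dual ascent
    return x, z, u
```

With `A` the identity the minimizer is simply the soft-thresholded `b`, which makes the sketch easy to sanity-check; an adaptive variant would replace the fixed `mu` above with a per-iteration rule driven by the current point.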