Adaptive cubic regularization methods for solving nonconvex problems require the efficient computation of the trial step, which involves the minimization of a cubic model. We propose a new approach in which this model is minimized in a low-dimensional subspace that, in contrast to classic approaches, is reused for a number of iterations. Whenever the trial step produced by the low-dimensional minimization process is unsatisfactory, we employ a regularized Newton step whose regularization parameter is a by-product of the model minimization over the low-dimensional subspace. We show that the worst-case complexity of classic cubic regularization methods is preserved, despite the possible regularized Newton steps. We focus on the large class of problems for which (sparse) direct linear system solvers are available, and provide several experimental results showing the very large gains of our new approach when compared to standard implementations of adaptive cubic regularization methods based on direct linear solvers. Our first choice of projection space for the low-dimensional model minimization is the polynomial Krylov subspace; nonetheless, we also explore the use of rational Krylov subspaces in cases where the polynomial ones lead to less competitive numerical results.
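To make the core computation concrete, the following is a minimal NumPy sketch, not the paper's actual algorithm, of minimizing the cubic model g^T s + (1/2) s^T H s + (sigma/3)‖s‖^3 over a polynomial Krylov subspace K_m(H, g), recovering the multiplier lam = sigma‖s‖ as a by-product. All names (`krylov_basis`, `cubic_model_min_subspace`) are our own, and the secular-equation solver assumes the projected Hessian is positive definite (the easy, convex case); the paper's method handles the general nonconvex setting.

```python
import numpy as np

def krylov_basis(H, g, m):
    """Arnoldi with full reorthogonalization: orthonormal basis Q of the
    polynomial Krylov space K_m(H, g) and the projected matrix T = Q^T H Q."""
    n = g.size
    Q = np.zeros((n, m))
    T = np.zeros((m, m))
    Q[:, 0] = g / np.linalg.norm(g)
    for j in range(m):
        w = H @ Q[:, j]
        for i in range(j + 1):
            T[i, j] = Q[:, i] @ w
            w = w - T[i, j] * Q[:, i]
        if j + 1 < m:
            beta = np.linalg.norm(w)
            if beta < 1e-12:            # breakdown: the space is invariant
                return Q[:, :j + 1], T[:j + 1, :j + 1]
            T[j + 1, j] = beta
            Q[:, j + 1] = w / beta
    return Q, T

def cubic_model_min_subspace(H, g, sigma, m=10):
    """Minimize  g^T s + 0.5 s^T H s + (sigma/3) ||s||^3  over K_m(H, g).
    Returns the trial step s and the multiplier lam = sigma * ||s||, which
    could later serve as the regularization parameter of a Newton system.
    Assumes the projected Hessian T is positive definite (convex case)."""
    Q, T = krylov_basis(H, g, m)
    gp = Q.T @ g                        # projected gradient
    I = np.eye(gp.size)
    ynorm = lambda lam: np.linalg.norm(np.linalg.solve(T + lam * I, -gp))
    # Secular equation lam = sigma * ||y(lam)||: the right-hand side is
    # decreasing in lam for T positive definite, so the root is bracketed
    # by [0, sigma * ||y(0)||] and plain bisection converges.
    lo, hi = 0.0, sigma * ynorm(0.0)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if sigma * ynorm(mid) > mid:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    y = np.linalg.solve(T + lam * I, -gp)
    return Q @ y, lam                   # step in the original space, multiplier
```

Note the cost structure that motivates reusing the subspace: once `Q` and `T` are built, a change of sigma only requires re-solving the small k-by-k secular equation, not touching `H` again.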