The preconditioned conjugate gradient (PCG) algorithm is one of the most popular algorithms for solving large-scale linear systems Ax = b, where A is a symmetric positive definite matrix. Rather than computing residuals directly, it updates the residual vectors recursively. Existing finite-precision analyses of the conjugate gradient (CG) algorithm typically assume that the norm of the recursively updated residual falls orders of magnitude below the machine precision, and focus mainly on bounding the residual gap thereafter. This work introduces a framework for the PCG algorithm and rigorously proves that, after sufficiently many iterations, the relative backward and forward errors of the computed PCG solution can reach O(u) and O(u)\kappa(A)^{1/2}, respectively, without any assumption on the norm of the recursively updated residual, where u denotes the unit roundoff and \kappa(A) the condition number of A. Our PCG framework further shows that applying the preconditioner in low precision does not compromise the accuracy of the final results, provided that reasonable conditions are satisfied. Our theoretical results are illustrated through a set of numerical experiments.
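As a minimal sketch of the scheme discussed above (not the paper's exact setup): the PCG iteration below updates the residual recursively via r -= alpha*q instead of recomputing b - A@x, and the Jacobi preconditioner is applied in single precision while the rest of the iteration runs in double precision. The test problem, the Jacobi choice, and all names here are illustrative assumptions.

```python
import numpy as np

def pcg(A, b, M_solve, tol=1e-12, maxit=2000):
    """Preconditioned conjugate gradient for SPD A.

    The residual r is updated recursively (r -= alpha*q) rather than
    recomputed as b - A @ x, matching the recursion discussed above.
    M_solve(r) applies the preconditioner and may run in low precision.
    """
    x = np.zeros_like(b)
    r = b.copy()                      # recursively updated residual
    z = M_solve(r)
    p = z.copy()
    rz = r @ z
    for k in range(maxit):
        q = A @ p
        alpha = rz / (p @ q)
        x += alpha * p
        r -= alpha * q                # recursive residual update
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k + 1
        z = M_solve(r)
        rz_new = r @ z
        beta = rz_new / rz
        rz = rz_new
        p = z + beta * p
    return x, maxit

# Illustrative SPD test problem with log-spaced spectrum (kappa ~ 1e4).
rng = np.random.default_rng(0)
n = 200
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.logspace(0, 4, n)) @ Q.T
A = (A + A.T) / 2                     # enforce exact symmetry
b = rng.standard_normal(n)

# Jacobi preconditioner applied in single precision (float32).
d32 = np.diag(A).astype(np.float32)
M_solve = lambda r: (r.astype(np.float32) / d32).astype(np.float64)

x, iters = pcg(A, b, M_solve)
backward = np.linalg.norm(b - A @ x) / (
    np.linalg.norm(A) * np.linalg.norm(x) + np.linalg.norm(b)
)
```

Despite the low-precision preconditioner application, the normwise relative backward error of the computed solution is tiny, consistent with the claim that low-precision preconditioning need not degrade the final accuracy.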