Matrix-vector algorithms, particularly Krylov subspace methods, are widely viewed as the most effective algorithms for solving large systems of linear equations. This paper establishes lower bounds on the worst-case number of matrix-vector products needed by such an algorithm to approximately solve a general linear system. The first main result is that, for a matrix-vector algorithm that can perform products with both a matrix and its transpose, $\Omega(\kappa\log(1/\varepsilon))$ matrix-vector products are necessary to solve a linear system with condition number $\kappa$ to accuracy $\varepsilon$, matching the upper bound achieved by conjugate gradient on the normal equations. The second main result is that one-sided algorithms, which lack access to the transpose, must use $n$ matrix-vector products to solve an $n \times n$ linear system, even when the problem is perfectly conditioned. Both main results include explicit constants that match known upper bounds up to a factor of four. Together, these results rigorously demonstrate the limitations of matrix-vector algorithms and confirm the optimality of widely used Krylov subspace methods.
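For concreteness, here is a minimal sketch of the two-sided upper-bound algorithm the abstract refers to: conjugate gradient applied to the normal equations $A^\top A x = A^\top b$ (CGNR), which accesses $A$ only through one product with $A$ and one with $A^\top$ per iteration, and reaches accuracy $\varepsilon$ in $O(\kappa\log(1/\varepsilon))$ iterations because the normal equations have condition number $\kappa^2$. The function names `matvec`/`rmatvec`, the test matrix, and the stopping rule are illustrative assumptions, not details from the paper.

```python
import numpy as np

def cgnr(matvec, rmatvec, b, tol=1e-10, maxiter=None):
    """Conjugate gradient on the normal equations (CGNR).

    Solves A x = b using only the two black-box queries the two-sided
    model allows: matvec(v) = A @ v and rmatvec(v) = A.T @ v.
    """
    n = b.shape[0]
    maxiter = 2 * n if maxiter is None else maxiter
    x = np.zeros(n)
    r = b.copy()               # residual of the original system: r = b - A x
    z = rmatvec(r)             # residual of the normal equations: z = A^T r
    p = z.copy()
    zz = z @ z
    for _ in range(maxiter):
        Ap = matvec(p)         # one product with A
        alpha = zz / (Ap @ Ap)
        x += alpha * p
        r -= alpha * Ap
        z = rmatvec(r)         # one product with A^T
        zz_new = z @ z
        if np.sqrt(zz_new) <= tol:
            break
        p = z + (zz_new / zz) * p
        zz = zz_new
    return x

# Usage on a random, well-conditioned (shifted) test matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)) + 20.0 * np.eye(50)
b = rng.standard_normal(50)
x = cgnr(lambda v: A @ v, lambda v: A.T @ v, b)
print(np.linalg.norm(A @ x - b))  # residual should be near the tolerance
```

Each iteration spends exactly two matrix-vector products, one per side, which is the query count the $\Omega(\kappa\log(1/\varepsilon))$ lower bound is measured against.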