Sparse matrices play a significant role in scientific computing, including fields related to artificial intelligence. Studies of sparse matrix--vector multiplication (SpMV) have shown that Krylov subspace methods are particularly sensitive to round-off errors in floating-point arithmetic. Multiple-precision linear computation reduces these round-off errors and thereby stabilizes convergence. In this paper, we present the performance of our SpMV accelerated with SIMD instructions, demonstrating its effectiveness on various examples, including Krylov subspace methods.