Benefiting from the advancement of hardware accelerators such as GPUs, deep neural networks and scientific computing applications can achieve superior performance. Recently, the computing capacity of emerging hardware accelerators has increased rapidly, while memory bandwidth has not kept pace with this growth. This disparity widens the gap between computing and memory, leading to inefficiencies in conventional algorithms, which are likely to shift from compute-bound to memory-bound. Symmetric eigenvalue decomposition (EVD), a critical operation in various research domains including scientific computing, deep learning training, and inference algorithms, exhibits suboptimal performance, achieving less than 3\% hardware compute utilization on the H100 GPU. In this paper, we analyze the features of emerging hardware accelerators to identify the bottlenecks inherent in conventional EVD algorithms. To improve EVD performance, we propose several algorithmic optimizations aimed at solving the memory-bound problem and better utilizing the rich compute capacity and parallelism of emerging hardware accelerators. Experimentally, our proposed method demonstrates significant speedups on tridiagonalization, the main workload accounting for over 90\% of EVD's elapsed time, compared to the state-of-the-art cuSOLVER tridiagonalization, achieving up to 10.1x, 7.5x, and 2.3x improvements on H100, A100, and RTX 4090 GPUs, respectively. The end-to-end performance of our EVD solver is also up to 4.1x faster than cuSOLVER.
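As background on the two-phase pipeline the abstract refers to, a minimal CPU-side sketch using SciPy's LAPACK bindings (purely illustrative, not the authors' GPU implementation): phase 1 reduces the symmetric matrix to tridiagonal form (the workload that dominates EVD runtime), and phase 2 solves the tridiagonal eigenproblem, whose eigenvalues match those of the original matrix.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal
from scipy.linalg.lapack import dsytrd

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n))
A = (A + A.T) / 2  # symmetrize

# Phase 1: Householder tridiagonalization, A = Q T Q^T.
# dsytrd returns the reduced matrix, the diagonal d and
# off-diagonal e of T, the Householder scalars tau, and info.
_, d, e, _, info = dsytrd(A)
assert info == 0

# Phase 2: the tridiagonal matrix T is similar to A,
# so its eigenvalues equal those of A.
w_tri = eigh_tridiagonal(d, e, eigvals_only=True)
w_ref = np.linalg.eigvalsh(A)
assert np.allclose(np.sort(w_tri), np.sort(w_ref))
```

On GPUs, phase 1 is dominated by memory-bound symmetric matrix-vector products, which is consistent with the low compute utilization reported above.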