In recent years, novel AI accelerators have emerged as promising alternatives to GPUs for AI model training and inference. One such accelerator, the Cerebras CS-3, achieves strong performance on large-model training as well as scientific applications such as molecular dynamics simulations. While dense compute workloads have been thoroughly explored on the CS-3, its potential for sparse workloads has not been fully examined. Applications requiring sparse linear algebra kernels, such as graph neural networks (GNNs), linear solvers, and recommendation systems, could achieve good performance on a dataflow accelerator like the CS-3. In this work, we explore two key sparse linear algebra kernels, sparse-dense matrix multiplication (SpMM) and sampled dense-dense matrix multiplication (SDDMM), on the Cerebras CS-3. We propose low-level CS-3 kernel designs for these operations and optimize them to improve I/O performance, memory footprint, and scalability to large matrices. Our evaluation examines memory footprint and SpMM/SDDMM speedup relative to a CPU. The results suggest that the CS-3 can outperform the CPU by 100$\times$ for SpMM on 90\% sparse matrices, with performance improving as the sparse matrix dimensionality increases, and by 20$\times$ for SDDMM on 90\% sparse matrices. We additionally find that as sparsity increases beyond 99\%, the CS-3 suffers performance degradation that makes it slower than the CPU for SpMM.