Generative foundation models have become an important tool for data reconstruction and simulation in scientific computing, and they are increasingly integrated with traditional numerical simulations. At the same time, new hardware features such as matrix acceleration units and high-bandwidth memory make CPU-based clusters a promising platform for accelerating and scaling such models, furthering the convergence of artificial intelligence and scientific computing. We present DiT-HC, the first system to train and scale the Diffusion Transformer (DiT) generative model on a next-generation HPC CPU cluster. DiT-HC introduces three key techniques: (1) communication-free tensor parallelism (CFTP) with AutoMem for automated memory-aware dataflow, (2) HCOps, a suite of optimized GEMM and operator kernels that exploit vector and matrix acceleration units, and (3) a custom MPI backend that overlaps computation, communication, and memory movement. Experiments show speedups of 8.2× to 87.7× over native or public CPU libraries and 90.6% weak-scaling efficiency on 256 nodes. These results demonstrate the feasibility of large-scale generative model training on CPU clusters and offer new insights for future HPC-AI co-design.
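To make the overlap idea behind the custom MPI backend concrete, the following is a minimal, illustrative sketch of computation/communication overlap using standard nonblocking MPI primitives. It is not the DiT-HC implementation: the buffer size, neighbor pattern, and the `compute_local_block` routine are hypothetical stand-ins for the system's actual kernels and exchange schedule.

```c
/* Minimal sketch of computation/communication overlap with nonblocking MPI.
 * Illustrative only: buffer sizes, the ring exchange pattern, and
 * compute_local_block are hypothetical, not taken from DiT-HC. */
#include <mpi.h>
#include <stdlib.h>

#define N (1 << 20)

/* Hypothetical local compute step (stands in for a GEMM/operator kernel). */
static void compute_local_block(float *buf, int n) {
    for (int i = 0; i < n; i++) buf[i] *= 2.0f;
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    float *send_buf = malloc(N * sizeof(float));
    float *recv_buf = malloc(N * sizeof(float));
    float *work_buf = malloc(N * sizeof(float));
    for (int i = 0; i < N; i++) { send_buf[i] = (float)rank; work_buf[i] = 1.0f; }

    int next = (rank + 1) % size;
    int prev = (rank + size - 1) % size;
    MPI_Request reqs[2];

    /* Post a nonblocking exchange, then compute on an independent buffer
     * while the network transfer is in flight. */
    MPI_Irecv(recv_buf, N, MPI_FLOAT, prev, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(send_buf, N, MPI_FLOAT, next, 0, MPI_COMM_WORLD, &reqs[1]);

    compute_local_block(work_buf, N);          /* overlapped computation */

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE); /* sync before using recv_buf */

    free(send_buf); free(recv_buf); free(work_buf);
    MPI_Finalize();
    return 0;
}
```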