Cellular automata (CA) are simulation models that can produce complex emergent behaviors from simple local rules. Although state-of-the-art GPU solutions are already fast due to their data-parallel nature, their performance can degrade rapidly in CA with a large neighborhood radius. With the inclusion of tensor cores across the entire GPU ecosystem, interest has grown in leveraging these fast units outside the field of artificial intelligence, their original purpose. In this work, we present CAT, a GPU tensor core approach that accelerates CA in which the cell transition function acts on a weighted summation of the cell's neighborhood. CAT is evaluated theoretically, using an extended PRAM cost model, as well as empirically, using the Larger Than Life (LTL) family of CA as case studies. The results confirm that the cost model is accurate: CAT exhibits constant time throughout the entire radius range $1 \le r \le 16$, and its theoretical speedups agree with the empirical results. At low radii ($r=1,2$), CAT is competitive and is surpassed only by the fastest state-of-the-art GPU solution. From $r=3$ onward, CAT progressively outperforms all other approaches, reaching speedups of up to $101\times$ over a GPU baseline and up to $\sim 14\times$ over the fastest state-of-the-art GPU approach. In terms of energy efficiency, CAT is competitive in the range $1 \le r \le 4$, and from $r \ge 5$ it is the most energy-efficient approach. As for performance scaling across GPU architectures, CAT shows a promising trend: if it continues in future generations, CAT's performance would increase at a higher rate than that of classical GPU solutions. These results position CAT as an attractive GPU approach for scientists who need to study emergent phenomena in CA with large neighborhood radii.
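To make the targeted class of CA concrete, the following is a minimal sketch (not the CAT algorithm itself) of one step of a Larger-Than-Life rule, where the transition function acts on a weighted summation (here with uniform weights) of the radius-$r$ Moore neighborhood. The toroidal boundary and the birth/survival intervals are illustrative assumptions, not values taken from this work.

```python
# Hypothetical illustration: one Larger-Than-Life (LTL) step where the
# transition function depends only on a weighted sum of the radius-r
# Moore neighborhood (uniform weights here). Interval bounds are
# illustrative, not the rules benchmarked in the paper.

def ltl_step(grid, r=2, birth=(7, 10), survive=(6, 11)):
    n = len(grid)
    nxt = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # weighted summation over the (2r+1)^2 neighborhood,
            # with toroidal (wrap-around) boundary conditions
            s = sum(grid[(i + di) % n][(j + dj) % n]
                    for di in range(-r, r + 1)
                    for dj in range(-r, r + 1))
            if grid[i][j]:
                nxt[i][j] = 1 if survive[0] <= s <= survive[1] else 0
            else:
                nxt[i][j] = 1 if birth[0] <= s <= birth[1] else 0
    return nxt
```

The per-cell cost of this direct formulation grows as $O(r^2)$, which is what makes large radii expensive for classical approaches; reformulating the neighborhood summation as a matrix multiplication is what allows tensor cores to be applied.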