The latency and energy of tensor algebra accelerators depend on how data movement and operations are scheduled (i.e., mapped) onto the accelerator, so determining the potential of an accelerator architecture requires both a performance model and a mapper to search for the optimal mapping. A key optimization that the mapper must explore is fusion, i.e., holding data on-chip between computation steps, which has been shown to reduce energy and latency by reducing DRAM accesses. However, prior mappers cannot find optimal mappings with fusion (i.e., fused mappings) in feasible runtime because the number of fused mappings to search grows exponentially with the number of computation steps in the workload. In this paper, we introduce the Fast and Fusiest Mapper (FFM), the first mapper to quickly find optimal mappings in a comprehensive fused mapspace for tensor algebra workloads. FFM shrinks the search space by pruning subsets of mappings (i.e., partial mappings) that provably can never be part of an optimal mapping, thereby eliminating at once all suboptimal mappings that contain those partial mappings. FFM then joins the remaining partial mappings to construct optimal fused mappings. We evaluate FFM and show that, although the mapspace size grows exponentially with the number of computation steps, FFM's runtime scales approximately linearly. FFM is orders of magnitude ($>1000\times$) faster than prior state-of-the-art approaches at finding optimal mappings for Transformers.