Dynamic behaviors are becoming prevalent in tensor applications such as machine learning, where many widely used models contain data-dependent tensor shapes and control flow. However, the limited expressiveness of prior programming abstractions for spatial dataflow accelerators (SDAs) forces these dynamic behaviors to be implemented statically or left unoptimized. To address these challenges, we present Streaming Tensor Programs (STeP), a streaming abstraction that enables dynamic tensor workloads to run efficiently on SDAs. STeP introduces flexible routing operators, an explicit memory hierarchy, and symbolic-shape semantics that expose dynamic data rates and tensor dimensions. These capabilities unlock new optimizations, such as dynamic tiling, dynamic parallelization, and configuration time-multiplexing, that adapt SDA execution to dynamic behaviors while preserving dataflow efficiency. Evaluated with a cycle-approximate simulator on representative LLM layers and a full model with real-world traces, STeP enables dynamic tiling that advances beyond the Pareto-optimal frontier of prior work, dynamic parallelization that improves latency by ~2.72x, and configuration time-multiplexing that increases compute utilization by ~2.64x over prior SDA abstractions and their implementations.
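To make the notions of data-dependent tensor shapes and dynamic tiling concrete, the sketch below is a minimal, hypothetical Python analogy, not STeP's actual abstraction or API: a stream of variable-length batches flows through a matrix multiply whose tile size is chosen per batch at runtime rather than fixed at compile time.

```python
# Hypothetical sketch (not STeP's API): a toy streaming pipeline where tensor
# shapes are only known at runtime and tiling is chosen per input element.

import numpy as np

def token_stream(trace):
    """Yield variable-length batches, mimicking data-dependent sequence lengths."""
    for seq_len in trace:
        yield np.random.rand(seq_len, 64)  # first dimension depends on runtime data

def choose_tile(seq_len, max_tile=32):
    """Pick a tile size that divides the dynamic sequence length (illustrative heuristic)."""
    for t in range(min(max_tile, seq_len), 0, -1):
        if seq_len % t == 0:
            return t
    return 1

def tiled_matmul(x, w):
    """Multiply x (seq_len x 64) by w (64 x 64) one dynamically sized tile at a time."""
    tile = choose_tile(x.shape[0])
    out = np.empty((x.shape[0], w.shape[1]))
    for i in range(0, x.shape[0], tile):
        out[i:i + tile] = x[i:i + tile] @ w  # each tile acts as one stream element
    return out

if __name__ == "__main__":
    w = np.random.rand(64, 64)
    for batch in token_stream([48, 17, 96]):  # data-dependent shapes from a "trace"
        y = tiled_matmul(batch, w)
        print(batch.shape, "-> tile", choose_tile(batch.shape[0]), "-> out", y.shape)
```

In a static abstraction, the tile size (and thus the hardware configuration) would have to be fixed for the worst-case shape; the point of the sketch is only that choosing it per runtime shape avoids padding or reconfiguration overhead.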