Dynamic shape computations have become critical in modern machine learning workloads, especially in emerging large language models. The success of these models has driven the demand for their universal deployment across a diverse set of backend environments. In this paper, we present Relax, a compiler abstraction for optimizing end-to-end dynamic machine learning workloads. Relax introduces a cross-level abstraction that encapsulates computational graphs, loop-level tensor programs, and external library calls in a single representation. Relax also introduces first-class symbolic shape annotations to track dynamic shape computations globally across the program, enabling dynamic shape-aware cross-level optimizations. We build an end-to-end compilation framework using the proposed approach to optimize dynamic shape models. Experimental results on LLMs show that Relax delivers performance competitive with state-of-the-art systems across various GPUs and enables deployment of emerging models to a broader set of environments, including mobile phones, embedded devices, and web browsers.
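To illustrate the idea behind first-class symbolic shape annotations, the following is a minimal, hypothetical Python sketch (not Relax's actual API): dynamic dimensions are represented by named symbols, and identical symbols are known to be equal even though their concrete values are only determined at runtime, which lets shape information propagate across operator boundaries.

```python
# Hypothetical sketch of symbolic shape tracking, for illustration only.
# A shape is a tuple whose entries are either ints (static dimensions)
# or strings naming symbolic dimensions (e.g. "n" for sequence length).

def matmul_shape(a, b):
    """Infer the result shape of a matmul over symbolic shapes."""
    (n, k1), (k2, m) = a, b
    # Symbolic equality: identical symbols (or equal ints) are provably
    # equal at compile time, even if their runtime values are unknown.
    assert k1 == k2, f"inner dimensions must match: {k1} vs {k2}"
    return (n, m)

# The dynamic sequence length "n" flows through the program unchanged,
# so later passes can reason about it globally (e.g. plan memory in
# terms of n rather than falling back to fully dynamic allocation).
hidden = matmul_shape(("n", 512), (512, 1024))   # ("n", 1024)
logits = matmul_shape(hidden, (1024, 32000))     # ("n", 32000)
```

Because the symbol "n" survives every inference step, a compiler built this way can perform the dynamic shape-aware cross-level optimizations the abstract describes, rather than treating each unknown dimension as opaque.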