As large-scale Generative AI models evolve beyond text (1D) generation to include image (2D) and video (3D) generation, processing spatial and temporal information presents unique challenges to quality, performance, and efficiency. We present the first work towards understanding this new system design space for multi-modal text-to-image (TTI) and text-to-video (TTV) generation models. Current model architecture designs are bifurcated into two categories: Diffusion- and Transformer-based models. Our systematic performance characterization on a suite of eight representative TTI/TTV models shows that after state-of-the-art optimization techniques such as Flash Attention are applied, Convolution accounts for up to 44% of execution time for Diffusion-based TTI models, while Linear layers consume up to 49% of execution time for Transformer-based models. We additionally observe that Diffusion-based TTI models resemble the Prefill stage of LLM inference, and benefit from a 1.1-2.5x greater speedup from Flash Attention than Transformer-based TTI models, which resemble the Decode phase. Since optimizations designed for LLMs do not map directly onto TTI/TTV models, we conduct a thorough characterization of these workloads to gain insights into new optimization opportunities. In doing so, we define sequence length in the context of TTI/TTV models and observe that sequence length can vary by up to 4x during Diffusion model inference. We additionally observe that the temporal aspects of TTV workloads pose unique system bottlenecks, with Temporal Attention accounting for over 60% of total Attention time. Overall, our in-depth system performance characterization is a critical first step towards designing efficient and deployable systems for emerging TTI/TTV workloads.
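To build intuition for the reported Prefill/Decode gap, the sketch below contrasts the attention score matrix a naive kernel materializes in a prefill-shaped call (query length equals key length, as in Diffusion denoising over the full latent sequence) versus a decode-shaped call (query length of one, as in token-by-token Transformer generation). This is an illustrative back-of-the-envelope model, not from the paper; the sequence length and the helper function are assumptions for illustration.

```python
# Hypothetical sketch: fused kernels such as Flash Attention avoid
# materializing the full attention score matrix, so their benefit
# scales with the size of that intermediate. Prefill-shaped attention
# (L x L scores) has a far larger intermediate than decode-shaped
# attention (1 x L scores), which is one intuition for why
# Diffusion-based TTI models see larger Flash Attention speedups.

def score_matrix_elements(q_len: int, kv_len: int) -> int:
    """Number of score-matrix elements a naive attention kernel writes."""
    return q_len * kv_len

seq_len = 4096  # illustrative sequence length, not a measured value

prefill_scores = score_matrix_elements(seq_len, seq_len)  # L x L, prefill-like
decode_scores = score_matrix_elements(1, seq_len)         # 1 x L, decode-like

# The eliminated intermediate is seq_len times larger in the
# prefill-like case.
print(prefill_scores // decode_scores)  # -> 4096
```

The actual speedup gap (1.1-2.5x in the abstract) is much smaller than this ratio because attention is only one component of end-to-end execution time, alongside Convolution and Linear layers.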