Large-scale foundation models have demonstrated exceptional performance in language and vision tasks. However, the numerous dense matrix-vector operations involved in these large networks pose significant computational challenges during inference. To address these challenges, we introduce the Block-Level Adaptive STructured (BLAST) matrix, designed to learn and leverage efficient structures prevalent in the weight matrices of linear layers within deep learning models. Compared to existing structured matrices, the BLAST matrix offers substantial flexibility, as it can represent various types of structures that are either learned from data or computed from pre-existing weight matrices. We demonstrate the efficiency of the BLAST matrix for compressing models on both language and vision tasks, showing that (i) for medium-sized models such as ViT and GPT-2, training with BLAST weights boosts performance while reducing complexity by 70\% and 40\%, respectively; and (ii) for large foundation models such as Llama-7B and DiT-XL, the BLAST matrix achieves $2\times$ compression while exhibiting the lowest performance degradation among all tested structured matrices. Our code is available at \url{https://github.com/changwoolee/BLAST}.
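To make the idea of a block-level structured matrix concrete, the following is a minimal NumPy sketch of one plausible parameterization: a block matrix whose $(i,j)$-th block is $U_i \,\mathrm{diag}(s_{ij})\, V_j^\top$, with left factors shared across block rows and right factors shared across block columns. The names (\texttt{blast\_matvec}, the shapes, and this exact factorization) are illustrative assumptions for exposition; the paper's precise construction is in the repository linked above.

```python
import numpy as np

# Hypothetical sketch of a block-level low-rank weight matrix in the
# spirit of BLAST: block (i, j) = U[i] @ diag(s[i, j]) @ V[j].T, so the
# left/right factors are shared across block rows/columns.  The exact
# parameterization used by the paper may differ.

rng = np.random.default_rng(0)
p, b, r = 3, 4, 2                    # p x p grid of b x b blocks, rank r each
U = rng.standard_normal((p, b, r))   # shared left factors, one per block row
V = rng.standard_normal((p, b, r))   # shared right factors, one per block column
s = rng.standard_normal((p, p, r))   # per-block diagonal coupling coefficients

def blast_matvec(U, V, s, x):
    """Structured matvec: y_i = U_i @ sum_j diag(s_ij) @ (V_j.T @ x_j)."""
    p, b, r = U.shape
    xb = x.reshape(p, b)
    z = np.einsum('jbr,jb->jr', V, xb)        # project each input block once
    y = np.einsum('ibr,ijr,jr->ib', U, s, z)  # couple and lift per output block
    return y.reshape(p * b)

# Dense reference: assemble the full matrix block by block and compare.
B = np.block([[U[i] @ np.diag(s[i, j]) @ V[j].T for j in range(p)]
              for i in range(p)])
x = rng.standard_normal(p * b)
print(np.allclose(blast_matvec(U, V, s, x), B @ x))
```

The structured matvec touches each block's factors only once (cost roughly $O(p^2 b r)$ with $r \ll b$) instead of materializing the dense $pb \times pb$ matrix, which is the source of the inference savings the abstract describes.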