Massive Over-activation Yielded Uplifts (MOYU) is an inherent property of large language models, and dynamic activation (DA) built on the MOYU property is a clever yet under-explored strategy for accelerating inference in these models. Existing methods that exploit MOYU often face a significant "Impossible Trinity": they struggle to simultaneously maintain model performance, enhance inference speed, and extend applicability across diverse architectures. Because of the theoretical ambiguities surrounding MOYU, this paper elucidates the root cause of the MOYU property and outlines the mechanisms behind two primary limitations encountered by current DA methods: 1) history-related activation uncertainty, and 2) semantic-irrelevant activation inertia. Our analysis not only underscores the limitations of current dynamic activation strategies in large-scale LLaMA models but also identifies opportunities for refining the design of future sparsity schemes.