Large language models have demonstrated exceptional performance across a wide range of tasks. However, dense models usually suffer from sparse activation, where many activation values tend towards zero (i.e., are inactive). We argue that this could restrict efficient exploration of the model's representation space. To mitigate this issue, we propose Finedeep, a deep-layered, fine-grained expert architecture for dense models. Our framework partitions the feed-forward network layers of traditional dense models into small experts and arranges them across multiple sub-layers. A novel routing mechanism determines each expert's contribution. We conduct extensive experiments across various model sizes, demonstrating that our approach significantly outperforms traditional dense architectures in terms of perplexity and benchmark performance while maintaining a comparable number of parameters and floating-point operations. Moreover, we find that Finedeep achieves optimal results when depth and width are balanced, specifically by adjusting the number of expert sub-layers and the number of experts per sub-layer. Empirical results confirm that Finedeep effectively alleviates sparse activation and efficiently utilizes the representation capacity of dense models.
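To make the architectural idea concrete, the following is a minimal NumPy sketch of the partitioning scheme described above. It is an illustration under our own assumptions, not the paper's exact formulation: all class and variable names (`FineGrainedFFN`, `n_sublayers`, `n_experts`) are hypothetical, and the soft-gated routing shown here is one plausible reading of "determine each expert's contribution" in which every expert remains active.

```python
import numpy as np

# Hypothetical sketch (assumed names, assumed soft routing): a dense FFN with
# intermediate size d_ff is split into n_sublayers * n_experts small experts,
# each of width d_ff // (n_sublayers * n_experts). Sub-layers add depth;
# experts within a sub-layer add width. A per-sub-layer router produces soft
# gates, so all experts contribute a nonzero share of the output.

rng = np.random.default_rng(0)

def silu(x):
    # SiLU activation, common in modern FFN blocks
    return x / (1.0 + np.exp(-x))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class FineGrainedFFN:
    def __init__(self, d_model=16, d_ff=64, n_sublayers=2, n_experts=4):
        d_exp = d_ff // (n_sublayers * n_experts)  # per-expert width
        # parameter count stays comparable to the original dense FFN
        self.experts = [
            [(rng.standard_normal((d_model, d_exp)) * 0.02,   # W_in
              rng.standard_normal((d_exp, d_model)) * 0.02)   # W_out
             for _ in range(n_experts)]
            for _ in range(n_sublayers)
        ]
        # one router per sub-layer scores its experts from the input token
        self.routers = [rng.standard_normal((d_model, n_experts)) * 0.02
                        for _ in range(n_sublayers)]

    def forward(self, x):
        # x: (seq_len, d_model); sub-layers run sequentially (depth),
        # experts within a sub-layer run in parallel (width)
        for experts, router in zip(self.experts, self.routers):
            gates = softmax(x @ router)          # (seq_len, n_experts)
            out = np.zeros_like(x)
            for i, (w_in, w_out) in enumerate(experts):
                out += gates[:, i:i + 1] * (silu(x @ w_in) @ w_out)
            x = x + out                          # residual connection
        return x

ffn = FineGrainedFFN()
y = ffn.forward(rng.standard_normal((5, 16)))
print(y.shape)  # (5, 16)
```

Note the depth/width trade-off the abstract refers to: for a fixed `d_ff`, increasing `n_sublayers` while decreasing `n_experts` (or vice versa) keeps parameters roughly constant while changing how the same capacity is arranged.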