Transformers excel in Natural Language Processing (NLP) due to their prowess in capturing long-range dependencies, but they suffer from rapidly growing resource consumption as sequence length increases. To address these challenges, we propose the MCSD model, an efficient language model with linear scaling and fast inference. The MCSD model leverages diverse feature fusion, primarily through the multi-channel slope and decay (MCSD) block, to represent features robustly. This block comprises slope and decay sections that extract features across diverse temporal receptive fields, facilitating the capture of both local and global information. In addition, the MCSD block performs element-wise fusion of these diverse features to further enhance fine-grained feature extraction. For inference, we reformulate the process as a recurrent representation, reducing space complexity to $O(1)$ and time complexity to $O(N)$. Our experiments show that MCSD attains higher throughput and lower GPU memory consumption than Transformers, while maintaining performance comparable to larger-scale language models on benchmark tests. These attributes position MCSD as a promising base for edge deployment and embodied intelligence.
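The details of the MCSD block are not given in this abstract, so the following is only a minimal sketch of the general idea behind the $O(1)$-space, $O(N)$-time recurrent inference claim: a decay-style recurrence carries a fixed-size state forward one token at a time, so memory does not grow with sequence length. The function name `decay_recurrence` and the decay coefficient `lam` are illustrative assumptions, not the paper's actual formulation.

```python
def decay_recurrence(xs, lam=0.5):
    """Illustrative decay recurrence (not the actual MCSD block).

    State update: h_t = lam * h_{t-1} + x_t.
    Only the current state h is kept between steps, so space is O(1)
    in sequence length, and a single pass over N tokens is O(N) time.
    """
    h = 0.0                    # fixed-size recurrent state
    outputs = []
    for x in xs:
        h = lam * h + x        # constant-cost update per token
        outputs.append(h)
    return outputs

# Example: each output blends the new input with an exponentially
# decayed summary of everything seen so far.
print(decay_recurrence([1.0, 0.0, 0.0]))  # -> [1.0, 0.5, 0.25]
```

Contrast this with self-attention, where each new token must attend over a cache of all previous tokens, so per-step cost and memory grow with the sequence.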