Recently, Transformer-based models have demonstrated promising results for long sequence time series forecasting. The self-attention mechanism, as the core component of these models, exhibits great potential in capturing various dependencies among data points. Despite these advancements, improving the efficiency of the self-attention mechanism remains a pressing concern. Unfortunately, existing optimization methods, which are tailored to specific models, face challenges in applicability and scalability for the design of future long sequence time series forecasting models. Hence, in this article, we propose a novel architectural framework that enhances Transformer-based models through the integration of Surrogate Attention Blocks (SAB) and Surrogate Feed-Forward Neural Network Blocks (SFB). The framework reduces both time and space complexity by replacing the self-attention and feed-forward layers with SAB and SFB, respectively, while preserving their expressive power and architectural advantages. We further demonstrate the equivalence of this substitution. Extensive experiments on 10 Transformer-based models across five distinct time series tasks show an average performance improvement of 12.4%, alongside a 61.3% reduction in parameter count.
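To make the substitution pattern concrete, the sketch below shows how a lower-complexity surrogate module might be dropped in for the self-attention sublayer of a standard Transformer encoder layer. This is a minimal, hypothetical illustration: the internals of the surrogate here (a learned linear token-mixing map with a fixed maximum sequence length) are an assumption made for demonstration only and do not reproduce the actual SAB or SFB designs described in the paper.

```python
# Hypothetical sketch of the drop-in replacement pattern: swap the O(L^2)
# self-attention sublayer for a surrogate block with lower cost.
# The surrogate internals here are illustrative assumptions, not the paper's SAB.
import torch
import torch.nn as nn


class SurrogateAttentionBlock(nn.Module):
    """Placeholder surrogate for self-attention.

    Assumes a fixed maximum sequence length `seq_len` and mixes information
    across time steps with a single learned linear map instead of computing
    the full attention matrix.
    """

    def __init__(self, seq_len: int, d_model: int):
        super().__init__()
        self.token_mix = nn.Linear(seq_len, seq_len)  # mixes along the time axis
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        mixed = self.token_mix(x.transpose(1, 2)).transpose(1, 2)
        return self.norm(x + mixed)  # residual connection


class SurrogateEncoderLayer(nn.Module):
    """Encoder layer whose attention sublayer is replaced by the surrogate block.

    The feed-forward sublayer could likewise be swapped for an SFB-style module;
    it is kept as a plain MLP here for brevity.
    """

    def __init__(self, seq_len: int, d_model: int, d_ff: int = 256):
        super().__init__()
        self.surrogate_attn = SurrogateAttentionBlock(seq_len, d_model)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model)
        )
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.surrogate_attn(x)
        return self.norm(x + self.ffn(x))


if __name__ == "__main__":
    layer = SurrogateEncoderLayer(seq_len=96, d_model=64)
    out = layer(torch.randn(8, 96, 64))  # (batch, seq_len, d_model)
    print(out.shape)
```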