Predicting multivariate time series is crucial and demands precise modeling of intricate patterns, including inter-series dependencies and intra-series variations. The distinctive trend characteristics of each series pose a particular challenge: existing methods rely on basic moving-average kernels and may struggle with the non-linear structures and complex trends of real-world data. To address this, we introduce a learnable decomposition strategy that captures dynamic trend information more reasonably. We further propose a dual attention module, implemented with channel-wise self-attention and autoregressive self-attention, that captures inter-series dependencies and intra-series variations simultaneously for better time series forecasting. To evaluate the effectiveness of our method, we conducted experiments on eight open-source datasets and compared it against state-of-the-art methods. Our Leddam (LEarnable Decomposition and Dual Attention Module) not only achieves significant gains in predictive performance; the proposed decomposition strategy can also be plugged into other methods for a large performance boost, reducing MSE error by 11.87% to 48.56%.
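To make the decomposition idea concrete, the following is a minimal, hypothetical sketch of a learnable trend/seasonal decomposition in PyTorch. It is not the paper's exact implementation: it simply replaces the fixed moving-average kernel of classical decomposition with a single learnable, softmax-normalized smoothing kernel shared across channels; the module name, kernel size, and initialization are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableDecomposition(nn.Module):
    """Split a series into trend + seasonal parts with a learnable kernel.

    Sketch only: a softmax over learnable weights keeps the kernel a valid
    weighted average, so at initialization (all-zero weights) it reduces to
    a plain moving average.
    """

    def __init__(self, kernel_size: int = 25):
        super().__init__()
        self.kernel_size = kernel_size
        # Zero init => uniform softmax => ordinary moving-average kernel.
        self.weights = nn.Parameter(torch.zeros(kernel_size))

    def forward(self, x: torch.Tensor):
        # x: (batch, length, channels)
        b, l, c = x.shape
        kernel = F.softmax(self.weights, dim=0).view(1, 1, -1)
        pad = self.kernel_size // 2
        # Smooth every channel independently with the same learned kernel.
        xt = x.permute(0, 2, 1).reshape(b * c, 1, l)
        xt = F.pad(xt, (pad, self.kernel_size - 1 - pad), mode="replicate")
        trend = F.conv1d(xt, kernel).reshape(b, c, l).permute(0, 2, 1)
        seasonal = x - trend  # residual carries the intra-series variations
        return trend, seasonal
```

By construction `trend + seasonal` reconstructs the input exactly, so the module can be dropped in front of any forecasting backbone that processes the two components separately.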