Predicting multivariate time series is crucial, demanding precise modeling of intricate patterns, including inter-series dependencies and intra-series variations. The distinctive trend characteristics of each time series pose challenges, and existing methods, which rely on basic moving-average kernels, may struggle with the nonlinear structure and complex trends of real-world data. To address this, we introduce a learnable decomposition strategy that captures dynamic trend information more faithfully. In addition, we propose a dual attention module, implemented with channel-wise self-attention and autoregressive self-attention, that captures inter-series dependencies and intra-series variations simultaneously for better time series forecasting. To evaluate the effectiveness of our method, we conduct experiments on eight open-source datasets and compare against state-of-the-art methods. Our Leddam (LEarnable Decomposition and Dual Attention Module) not only achieves significant improvements in predictive performance; the proposed decomposition strategy can also be plugged into other methods for a substantial performance boost, reducing MSE error by 11.87% to 48.56%.
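To make the contrast with fixed moving-average kernels concrete, here is a minimal sketch of a learnable trend/seasonal decomposition. The function name, the softmax normalization of the kernel, and the replicate padding are illustrative assumptions for this sketch, not the paper's exact design: the idea shown is only that the smoothing weights become trainable parameters instead of a fixed uniform average.

```python
import numpy as np

def learnable_decomposition(x, kernel_params):
    """Split a 1-D series x into (trend, seasonal) with a learnable kernel.

    kernel_params: raw learnable parameters (odd length); a softmax turns
    them into a convex weighting, which generalizes the fixed uniform
    moving-average kernel (recovered when all params are equal).
    """
    assert len(kernel_params) % 2 == 1, "use an odd kernel length"
    # Softmax so the kernel weights are positive and sum to one.
    w = np.exp(kernel_params - np.max(kernel_params))
    w /= w.sum()
    pad = len(w) // 2
    # Replicate-pad the boundaries so the trend keeps the input length.
    xp = np.pad(x, (pad, pad), mode="edge")
    trend = np.convolve(xp, w, mode="valid")
    seasonal = x - trend  # remainder after removing the smoothed trend
    return trend, seasonal
```

With all-zero parameters the softmax yields uniform weights, so the function reproduces a plain moving average; during training, gradient updates to `kernel_params` would let the kernel adapt to each series' trend shape.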