Downsampling-based methods for time series forecasting have attracted increasing attention due to their superiority in capturing sequence trends. However, these approaches mainly capture dependencies within subsequences but neglect inter-subsequence and inter-channel interactions, which limits forecasting accuracy. To address these limitations, we propose CTPNet, a novel framework that explicitly learns representations from three perspectives: i) inter-channel dependencies, captured by a temporal query-based multi-head attention mechanism; ii) intra-subsequence dependencies, modeled via a Transformer to characterize trend variations; and iii) inter-subsequence dependencies, extracted by reusing the encoder with residual connections to capture global periodic patterns. By jointly integrating these three levels, the proposed method provides a more holistic representation of temporal dynamics. Extensive experiments demonstrate the superiority of the proposed method.
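The three-perspective design can be illustrated with a toy NumPy sketch. All shapes, the downsampling period, and the use of plain scaled dot-product attention in place of the paper's multi-head attention and Transformer blocks are illustrative assumptions, not the actual CTPNet implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # plain scaled dot-product attention over the last two axes
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

# toy multivariate series: C channels, T timesteps (hypothetical sizes)
rng = np.random.default_rng(0)
C, T, p = 4, 24, 6                 # p = subsequence length after downsampling
x = rng.normal(size=(C, T))
n_sub = T // p

# i) inter-channel dependencies: channels attend to each other,
#    with each channel's time profile serving as its query/key/value
chan_repr = attention(x, x, x)                          # (C, T)

# split into subsequences by the assumed period p
subs = chan_repr.reshape(C, n_sub, p)                   # (C, n_sub, p)

# ii) intra-subsequence dependencies: attend among the p positions
#     inside each subsequence (each position as a 1-d token)
tokens = subs.reshape(C * n_sub, p, 1)
intra = attention(tokens, tokens, tokens).reshape(C, n_sub, p)

# iii) inter-subsequence dependencies: subsequences attend to one
#      another, with a residual connection as in the reused encoder
inter = intra + attention(intra, intra, intra)          # (C, n_sub, p)

# fused representation combining all three perspectives
forecast_repr = inter.reshape(C, T)
```

In this sketch the final `forecast_repr` would feed a forecasting head; the key point is only the ordering of the three dependency levels and the residual connection at the inter-subsequence stage.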