Deep learning methods have demonstrated their strengths in long-term time series forecasting, yet they often struggle to balance expressive power against computational efficiency. Multi-layer perceptrons (MLPs) offer a compromise, but their intrinsic point-wise mapping mode causes two critical problems: deficient contextual dependencies and an information bottleneck. Here, we propose the Coarsened Perceptron Network (CP-Net), which features a coarsening strategy that alleviates these problems of the prototype MLPs by forming information granules in place of solitary temporal points. CP-Net primarily uses a two-stage framework to extract semantic and contextual patterns, preserving correlations over larger timespans and filtering out volatile noise. This is further enhanced by a multi-scale setting, in which patterns of diverse granularities are fused into a comprehensive prediction. Built purely on structurally simple convolutions, CP-Net maintains linear computational complexity and low runtime while improving on the SOTA method by 4.1% on seven forecasting benchmarks.