The prevailing Direct Forecasting (DF) paradigm in Long-term Time Series Forecasting (LTSF) forces models to predict the entire future horizon in a single forward pass. While efficient, this rigid coupling of the output and evaluation horizons necessitates computationally prohibitive re-training for every target horizon. In this work, we uncover a counter-intuitive optimization anomaly: models trained on short horizons, when coupled with our proposed Evolutionary Forecasting (EF) paradigm, significantly outperform those trained directly on long horizons. We attribute this success to the mitigation of a fundamental optimization pathology inherent to DF, in which conflicting gradients from distant future steps cripple the learning of local dynamics. We establish EF as a unified generative framework and prove that DF is merely a degenerate special case of EF. Extensive experiments demonstrate that a single EF model surpasses task-specific DF ensembles across standard benchmarks and exhibits robust asymptotic stability under extreme extrapolation. This work propels a paradigm shift in LTSF: moving from passive Static Mapping to autonomous Evolutionary Reasoning.
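To make the contrast concrete, the following is a minimal sketch of the two forecasting regimes described above: DF emits the full horizon in one forward pass, while a short-horizon model can instead be rolled out iteratively, feeding its own predictions back into the context. This assumes EF resembles an autoregressive rollout of a short-horizon model; the paper's actual EF mechanism may differ, and `persistence_model` is a hypothetical stand-in for a trained forecaster.

```python
import numpy as np

def direct_forecast(model, history, horizon):
    """Direct Forecasting (DF): one forward pass emits the full horizon."""
    return model(history, horizon)  # shape: (horizon,)

def rollout_forecast(model, history, horizon, step):
    """Iterative rollout with a short-horizon model: predict `step` points,
    append them to the context, and repeat until `horizon` is covered.
    (Illustrative only; the paper's EF mechanism may differ.)"""
    context = list(history)
    preds = []
    while len(preds) < horizon:
        chunk = model(np.asarray(context), step)
        preds.extend(chunk)
        context.extend(chunk)
    return np.asarray(preds[:horizon])

# Toy stand-in "model": naive persistence of the last observed value.
def persistence_model(context, h):
    return np.full(h, context[-1])

history = np.array([1.0, 2.0, 3.0])
print(direct_forecast(persistence_model, history, 6))        # six copies of 3.0
print(rollout_forecast(persistence_model, history, 6, 2))    # six copies of 3.0
```

Note how the rollout variant never needs re-training for a new target horizon: the same short-horizon model covers any `horizon` by extending its own context, which is the decoupling of output and evaluation horizons the abstract argues for.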