The prevailing Direct Forecasting (DF) paradigm dominates Long-term Time Series Forecasting (LTSF) by forcing models to predict the entire future horizon in a single forward pass. While efficient, this rigid coupling of the output and evaluation horizons necessitates computationally prohibitive re-training for every target horizon. In this work, we uncover a counter-intuitive optimization anomaly: models trained on short horizons, when coupled with our proposed Evolutionary Forecasting (EF) paradigm, significantly outperform those trained directly on long horizons. We attribute this success to the mitigation of a fundamental optimization pathology inherent in DF, in which conflicting gradients from distant futures cripple the learning of local dynamics. We establish EF as a unified generative framework and prove that DF is merely a degenerate special case of EF. Extensive experiments demonstrate that a single EF model surpasses task-specific DF ensembles across standard benchmarks and exhibits robust asymptotic stability under extreme extrapolation. This work propels a paradigm shift in LTSF: moving from passive Static Mapping to autonomous Evolutionary Reasoning.
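The contrast between the two paradigms can be sketched as follows. This is a hypothetical illustration, not the paper's model: `short_horizon_model` is a stand-in persistence forecaster, and the EF-style loop simply iterates a short-horizon (h-step) predictor, feeding its own outputs back as context until the evaluation horizon H is covered. Setting h = H collapses the rollout to a single pass, mirroring the claim that DF is a degenerate special case of EF.

```python
import numpy as np

def short_horizon_model(context: np.ndarray, h: int) -> np.ndarray:
    """Toy stand-in forecaster: repeat the last observed value for h steps."""
    return np.full(h, context[-1])

def direct_forecast(context: np.ndarray, H: int) -> np.ndarray:
    """DF: the entire horizon H is produced in one forward pass."""
    return short_horizon_model(context, H)

def evolutionary_forecast(context: np.ndarray, H: int, h: int) -> np.ndarray:
    """EF-style rollout: iterate a short-horizon model, sliding the
    context window over its own predictions, until H steps are covered."""
    ctx = context.copy()
    preds: list[float] = []
    while len(preds) < H:
        step = short_horizon_model(ctx, h)          # predict h steps ahead
        preds.extend(step.tolist())
        ctx = np.concatenate([ctx, step])[-len(context):]  # slide window
    return np.array(preds[:H])

history = np.array([1.0, 2.0, 3.0])
H = 8
df_out = direct_forecast(history, H)
ef_out = evolutionary_forecast(history, H, h=2)   # short-horizon rollout
assert df_out.shape == ef_out.shape == (H,)
# With h = H, the EF rollout performs a single pass, i.e. it reduces to DF:
assert np.array_equal(evolutionary_forecast(history, H, h=H), df_out)
```

The decoupling shown here is what removes the re-training cost: the same h-step model serves any evaluation horizon H, whereas a DF model is bound to the horizon it was trained on.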