Neural Forecasters (NFs) have become a cornerstone of Long-term Time Series Forecasting (LTSF). However, recent progress has been hampered by an overemphasis on architectural complexity at the expense of fundamental forecasting principles. In this work, we revisit the principles of LTSF. We begin by formulating a Variance Reduction Hypothesis (VRH), positing that generating and combining multiple forecasts is essential for reducing the inherent uncertainty of NFs. Guided by this hypothesis, we propose Boosted Direct Output (BDO), a streamlined paradigm that hybridizes the causal structure of Auto-Regressive (AR) forecasting with the stability of Direct Output (DO), while implicitly realizing the principle of forecast combination within a single network. Furthermore, we address the critical validation-test generalization gap by employing parameter smoothing to stabilize optimization. Extensive experiments demonstrate that these simple yet principled improvements enable a direct temporal MLP to outperform recent, complex state-of-the-art models on nearly all benchmarks, without relying on intricate inductive biases. Finally, we empirically verify our hypothesis, establishing a dynamic performance bound that highlights promising directions for future research. The code for review is available at: https://anonymous.4open.science/r/ReNF-A151.
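The statistical intuition behind the VRH can be checked with a minimal, self-contained numerical sketch. The snippet below is not the paper's implementation: the toy target, the noise level sigma, and the ensemble size K are all hypothetical. It illustrates only the classical variance-reduction effect the hypothesis builds on: averaging K unbiased forecasts with independent errors shrinks the error variance by roughly a factor of K.

```python
import numpy as np

# Toy illustration of the Variance Reduction Hypothesis (VRH):
# averaging K independent, unbiased forecasts of the same target
# reduces the squared error roughly as 1/K.
rng = np.random.default_rng(0)

T = 10_000    # number of forecast targets (hypothetical)
K = 8         # number of combined forecasts (hypothetical)
sigma = 1.0   # per-forecast noise level (hypothetical)

target = np.sin(np.linspace(0, 20 * np.pi, T))            # toy ground truth
forecasts = target + sigma * rng.standard_normal((K, T))  # K noisy forecasts

mse_single = np.mean((forecasts[0] - target) ** 2)
mse_combined = np.mean((forecasts.mean(axis=0) - target) ** 2)

print(f"single-forecast MSE:   {mse_single:.4f}")    # ~ sigma^2     = 1.0
print(f"{K}-forecast mean MSE: {mse_combined:.4f}")  # ~ sigma^2 / K = 0.125
```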
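Parameter smoothing admits several realizations; a common one is an exponential moving average (EMA) of the model weights, evaluated in place of the raw weights. The sketch below assumes that EMA variant; the decay value, the ema_update helper, and the Linear dimensions are illustrative and not taken from the paper.

```python
import copy
import torch

def ema_update(ema_model, model, decay=0.999):
    """Blend current weights into the smoothed copy after each optimizer step."""
    with torch.no_grad():
        for p_ema, p in zip(ema_model.parameters(), model.parameters()):
            p_ema.mul_(decay).add_(p, alpha=1 - decay)

model = torch.nn.Linear(336, 96)          # e.g. lookback 336 -> horizon 96 (illustrative)
ema_model = copy.deepcopy(model).eval()   # smoothed copy used for validation/testing

# Training loop (schematic): after each optimizer.step(), call
#     ema_update(ema_model, model)
# and evaluate ema_model; the smoothed weights vary less across checkpoints,
# which is one way to narrow the validation-test generalization gap.
```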