Although contrastive and other representation-learning methods have long been explored in vision and NLP, their adoption in modern time series forecasters remains limited. We believe they hold strong promise for this domain. To unlock this potential, we explicitly align past and future representations, thereby bridging the distributional gap between input histories and future targets. To this end, we introduce TimeAlign, a lightweight, plug-and-play framework that establishes a new representation paradigm, distinct from contrastive learning, by aligning auxiliary features via a simple reconstruction task and feeding them back into any base forecaster. Extensive experiments across eight benchmarks verify its superior performance. Further studies indicate that the gains arise primarily from correcting frequency mismatches between historical inputs and future outputs. Additionally, we provide two theoretical justifications for how reconstruction improves forecasting generalization and how alignment increases the mutual information between learned representations and predicted targets. The code is available at https://github.com/TROUBADOUR000/TimeAlign.
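The core idea above — train a base forecaster jointly with an auxiliary objective that aligns the representation of the history window with that of the future window — can be illustrated with a minimal NumPy sketch. This is a conceptual toy, not the TimeAlign implementation: all dimensions, the linear "encoders" (`W_hist`, `W_fut`), the forecasting head (`W_head`), and the weighting `lam` are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (hypothetical): B series, L history steps, H future steps, D-dim representations.
B, L, H, D = 8, 96, 24, 16

history = rng.standard_normal((B, L))  # input histories
future = rng.standard_normal((B, H))   # future targets

# Hypothetical linear "encoders" mapping each window to a D-dim representation.
W_hist = rng.standard_normal((L, D)) * 0.1
W_fut = rng.standard_normal((H, D)) * 0.1
# Base forecaster head: predicts the future window from the history representation.
W_head = rng.standard_normal((D, H)) * 0.1

z_hist = history @ W_hist  # (B, D) representation of the past
z_fut = future @ W_fut     # (B, D) representation of the future
forecast = z_hist @ W_head  # (B, H) predictions from the base forecaster

# Standard forecasting objective.
forecast_loss = np.mean((forecast - future) ** 2)
# Auxiliary alignment-by-reconstruction objective: pull the past representation
# toward the future one, bridging the distributional gap between the two windows.
align_loss = np.mean((z_hist - z_fut) ** 2)

lam = 0.1  # hypothetical weight on the auxiliary term
total_loss = forecast_loss + lam * align_loss
```

In training, `total_loss` would be minimized jointly, so the auxiliary term shapes the features the base forecaster consumes; at inference only the forecasting path is needed, which is what makes the scheme lightweight and plug-and-play.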