Online continual learning (OCL) methods adapt to changing environments without forgetting past knowledge. Online time series forecasting (OTSF) poses an analogous real-world problem: data evolve over time, and success depends on both rapid adaptation and long-term memory. Indeed, time-varying and regime-switching forecasting models have been studied extensively, offering strong justification for applying OCL in these settings. Building on recent work that applies OCL to OTSF, this paper strengthens the theoretical and practical connections between time series methods and OCL. First, we reframe neural network optimization as a parameter filtering problem, showing that natural gradient descent is a score-driven method and proving its information-theoretic optimality. Second, we show that pairing a Student's t likelihood with natural gradient induces a bounded update, which improves robustness to outliers. Finally, we introduce Natural Score-driven Replay (NatSR), which combines this robust optimizer with a replay buffer and a dynamic scale heuristic that speeds adaptation at regime drifts. Empirical results demonstrate that NatSR achieves stronger forecasting performance than more complex state-of-the-art methods.
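The bounded-update property mentioned above can be illustrated with a minimal sketch: the score (gradient of the log-likelihood with respect to the location parameter) of a Gaussian grows linearly in the residual, whereas the Student's t score is bounded, so large outliers produce only small parameter updates. The function names and parameter values below are illustrative, not from the paper.

```python
import numpy as np

def gaussian_score(residual, sigma=1.0):
    # d/d(mu) of the Gaussian log-likelihood: linear in the residual,
    # hence unbounded -- a single outlier can cause an arbitrarily large update.
    return residual / sigma**2

def student_t_score(residual, nu=4.0, sigma=1.0):
    # d/d(mu) of the Student's t log-likelihood: the residual appears in the
    # denominator as well, so the score is bounded and decays for large outliers.
    return (nu + 1.0) * residual / (nu * sigma**2 + residual**2)

residuals = np.array([0.5, 2.0, 10.0, 100.0])
print(gaussian_score(residuals))   # grows without bound with the residual
print(student_t_score(residuals))  # peaks, then shrinks for large outliers
```

For `nu=4, sigma=1`, the t score is maximized near `|residual| = 2` and decays beyond it, which is exactly the down-weighting of outliers that makes the update robust.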