Time series forecasting persistently faces the challenge of concept drift, where data distributions evolve over time and degrade forecast model performance. Existing solutions are based on online learning: they continually organize recent time series observations into new training samples and update model parameters according to forecasting feedback on recent data. However, they overlook a critical issue: the ground-truth future values of each sample only become available after the forecast horizon has elapsed. This delay creates a temporal gap between the training samples and the test sample. Our empirical analysis reveals that this gap can itself introduce concept drift, causing forecast models to adapt to outdated concepts. In this paper, we present \textsc{Proceed}, a novel proactive model adaptation framework for online time series forecasting. \textsc{Proceed} first estimates the concept drift between the recently used training samples and the current test sample. It then employs an adaptation generator to efficiently translate the estimated drift into parameter adjustments, proactively adapting the model to the test sample. To enhance its generalization capability, \textsc{Proceed} is trained on synthesized diverse concept drifts. We conduct extensive experiments on five real-world datasets across various forecast models. The empirical study demonstrates that \textsc{Proceed} yields larger performance improvements than state-of-the-art online learning methods, significantly strengthening forecast models' resilience against concept drift.
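The proactive adaptation idea above can be illustrated with a toy sketch. This is an illustration only, not \textsc{Proceed}'s actual architecture: the linear forecast model, the mean-shift drift estimate, and the random linear map \texttt{G} standing in for the adaptation generator are all hypothetical simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear forecast model: y_hat = w @ x
w_base = rng.normal(size=4)

def estimate_drift(train_batch, test_x):
    # Toy drift estimate: shift between the input statistics of the
    # (delayed) training samples and the current test sample.
    return test_x - train_batch.mean(axis=0)

# Stand-in for the adaptation generator: a fixed linear map that
# translates the estimated drift into a parameter adjustment.
G = rng.normal(scale=0.1, size=(4, 4))

def proactive_adapt(w, train_batch, test_x):
    # Adjust the parameters for this test sample *before* its
    # ground-truth future values become available.
    delta = estimate_drift(train_batch, test_x)
    return w + G @ delta

train_batch = rng.normal(size=(16, 4))  # recent training samples
test_x = rng.normal(size=4)             # current test sample
w_adapted = proactive_adapt(w_base, train_batch, test_x)
print(w_adapted.shape)
```

The key contrast with standard online learning is that the parameter update here is driven by the estimated drift toward the test sample itself, rather than by feedback on already-labeled (and therefore outdated) samples.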