Recent years have witnessed the success of introducing deep learning models to time series forecasting. From a data generation perspective, we illustrate that existing models are susceptible to distribution shifts driven by temporal contexts, whether observed or unobserved. Such context-driven distribution shift (CDS) introduces biases in predictions within specific contexts and poses challenges for conventional training paradigms. In this paper, we introduce a universal calibration methodology for the detection of and adaptation to CDS with a trained model. To this end, we propose a novel CDS detector, termed the "residual-based CDS detector" or "Reconditionor", which quantifies a model's vulnerability to CDS by evaluating the mutual information between prediction residuals and their corresponding contexts. A high Reconditionor score indicates severe susceptibility and thereby necessitates model adaptation. In this circumstance, we put forth a straightforward yet potent adapter framework for model calibration, termed the "sample-level contextualized adapter" or "SOLID". This framework involves curating a dataset contextually similar to the given test sample and then fine-tuning the model's prediction layer for a limited number of steps. Our theoretical analysis demonstrates that this adaptation strategy achieves an optimal bias-variance trade-off. Notably, our proposed Reconditionor and SOLID are model-agnostic and readily adaptable to a wide range of models. Extensive experiments show that SOLID consistently enhances the performance of current forecasting models on real-world datasets, especially in cases with substantial CDS detected by the proposed Reconditionor, thus validating the effectiveness of the calibration approach.
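The Reconditionor score described above can be illustrated with a minimal sketch: given prediction residuals and a discrete temporal context per sample (e.g. day vs. night), a plug-in histogram estimate of the mutual information I(residual; context) is high when errors depend on context and near zero otherwise. The function name, binning scheme, and estimator below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def reconditionor_score(residuals, contexts, n_bins=10):
    """Histogram (plug-in) estimate of I(residual; context) in nats.

    A high score suggests prediction errors depend on the temporal
    context, i.e. the model is vulnerable to context-driven shift.
    Illustrative sketch only; the paper's estimator may differ.
    """
    # Discretize continuous residuals into equal-width bins.
    edges = np.histogram_bin_edges(residuals, bins=n_bins)
    r_bins = np.digitize(residuals, edges)

    # Empirical joint and marginal counts over (residual bin, context).
    n = len(residuals)
    joint, p_r, p_c = {}, {}, {}
    for r, c in zip(r_bins, contexts):
        joint[(r, c)] = joint.get((r, c), 0) + 1
        p_r[r] = p_r.get(r, 0) + 1
        p_c[c] = p_c.get(c, 0) + 1

    # I(R; C) = sum p(r,c) log( p(r,c) / (p(r) p(c)) ).
    mi = 0.0
    for (r, c), cnt in joint.items():
        mi += (cnt / n) * np.log(cnt * n / (p_r[r] * p_c[c]))
    return mi

# Residuals whose mean tracks the context score high; context-independent
# noise scores near zero, so the biased model is flagged for adaptation.
rng = np.random.default_rng(0)
ctx = rng.integers(0, 2, size=5000)            # hypothetical binary context
biased = rng.normal(loc=2.0 * ctx, scale=0.5)  # context-dependent errors
unbiased = rng.normal(size=5000)               # no context dependence
print(reconditionor_score(biased, ctx) > reconditionor_score(unbiased, ctx))  # → True
```

In this toy setting the score acts as the abstract's detection criterion: a practitioner would calibrate (e.g. with a SOLID-style adapter) only when the score exceeds a chosen threshold.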