Motivated by the recent success of time-series foundation models for zero-shot forecasting, we present a methodology for $\textit{in-context fine-tuning}$ of a time-series foundation model. In particular, we design a pretrained foundation model that can be prompted (at inference time) with multiple time-series examples in order to forecast a target time-series into the future. Our foundation model is specifically trained to utilize examples from multiple related time-series in its context window (in addition to the history of the target time-series) to help it adapt to the specific distribution of the target domain at inference time. We show that such a foundation model that uses in-context examples at inference time can obtain much better performance on popular forecasting benchmarks than supervised deep learning methods, statistical models, and other time-series foundation models. Interestingly, our in-context fine-tuning approach even rivals the performance of a foundation model that is explicitly fine-tuned on the target domain.
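To make the prompting setup concrete, below is a minimal sketch of how related in-context example series and the target history might be packed into a single context window. The separator convention and the `pack_context` helper are illustrative assumptions, not the paper's actual interface.

```python
import numpy as np

# Hypothetical sketch of context assembly for in-context fine-tuning.
# SEP and pack_context are assumptions for illustration, not the
# paper's actual tokenization or API.

SEP = np.array([np.nan])  # assumed separator between concatenated series


def pack_context(example_series, target_history, max_len=2048):
    """Concatenate related example series and the target history into one
    context vector, truncated to the most recent max_len points."""
    parts = []
    for s in example_series:
        parts.append(np.asarray(s, dtype=float))
        parts.append(SEP)
    parts.append(np.asarray(target_history, dtype=float))
    ctx = np.concatenate(parts)
    return ctx[-max_len:]


# Usage: two related series serve as in-context examples; the model
# would then forecast the continuation of the target history.
examples = [np.sin(np.linspace(0, 8, 200)), np.cos(np.linspace(0, 8, 200))]
history = np.sin(np.linspace(0, 4, 100))
context = pack_context(examples, history)
print(context.shape)  # (502,): 2 examples + 2 separators + history
```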