Foundation models have emerged as a promising approach to time series forecasting (TSF). Existing approaches either repurpose large language models (LLMs) or build large-scale time series datasets to develop TSF foundation models for universal forecasting. However, these methods face challenges due to the severe cross-domain gap or in-domain heterogeneity. This paper explores a new route to building a TSF foundation model from rich, high-quality natural images. Our key insight is that a visual masked autoencoder, pre-trained on the ImageNet dataset, can naturally serve as a numeric series forecaster. By reformulating TSF as an image reconstruction task, we bridge the gap between image pre-training and TSF downstream tasks. Surprisingly, without further adaptation in the time-series domain, the proposed VisionTS achieves superior zero-shot forecasting performance compared to existing TSF foundation models. With fine-tuning for a single epoch, VisionTS further improves its forecasts and achieves state-of-the-art performance in most cases. Extensive experiments reveal intrinsic similarities between images and real-world time series, suggesting that visual models may offer a ``free lunch'' for TSF and highlighting the potential for future cross-modality research. Our code is publicly available at https://github.com/Keytoyze/VisionTS.
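To make the reformulation concrete, the sketch below illustrates one plausible way to cast forecasting as masked image reconstruction: fold a univariate series into a 2-D array by stacking consecutive periods as rows, then mask the rows corresponding to the forecast horizon so a masked autoencoder can "inpaint" them. The function names (`series_to_image`, `make_masked_input`) and the NaN-based masking are illustrative assumptions, not the paper's actual pipeline, which additionally resizes and normalizes the array to match the pre-trained MAE's input format.

```python
import numpy as np

def series_to_image(x, period):
    """Fold a 1-D series into a 2-D array by stacking consecutive
    periods as rows (illustrative; the real pipeline also resizes
    and normalizes to match the visual MAE's expected input)."""
    n = (len(x) // period) * period  # drop any incomplete trailing period
    return np.asarray(x[:n], dtype=float).reshape(-1, period)

def make_masked_input(img, horizon_rows):
    """Append masked (NaN) rows for the forecast horizon, mimicking
    the masked-patch reconstruction objective: the model is asked to
    reconstruct the hidden region from the visible history."""
    mask = np.full((horizon_rows, img.shape[1]), np.nan)
    return np.vstack([img, mask])

# Example: 4 weeks of a daily-periodic series folded into 7 columns,
# with a 2-week forecast horizon left masked.
x = np.arange(28, dtype=float)
history_img = series_to_image(x, period=7)      # shape (4, 7)
model_input = make_masked_input(history_img, 2) # shape (6, 7)
```

Under this framing, the forecast is simply the MAE's reconstruction of the masked rows read back out as a 1-D sequence.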