How best to develop foundation models for time series forecasting remains an important open question. Tokenization is a crucial consideration in this effort: what is an effective discrete vocabulary for a real-valued sequential input? To address this question, we develop WaveToken, a wavelet-based tokenizer that allows models to learn complex representations directly in the space of time-localized frequencies. Our method first scales and decomposes the input time series, then thresholds and quantizes the wavelet coefficients, and finally pre-trains an autoregressive model to predict coefficients over the forecast horizon. By separating coarse and fine structures in the inputs, wavelets provide an elegant and compact language for time series forecasting that simplifies learning. Empirical results on a comprehensive benchmark, including 42 datasets for both in-domain and zero-shot settings, show that WaveToken: i) provides better accuracy than recently proposed foundation models for forecasting while using a much smaller vocabulary (1024 tokens), and performs on par with or better than modern deep learning models trained specifically on each dataset; and ii) exhibits superior generalization capabilities, achieving the best average rank across all datasets for three complementary metrics. In addition, we show that our method can easily capture complex temporal patterns of practical relevance that are challenging for other recent pre-trained models, including trends, sparse spikes, and non-stationary time series whose frequency content evolves over time.
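The tokenization pipeline described above (scale, wavelet-decompose, threshold, quantize) can be sketched as follows. This is a minimal illustration only: the Haar wavelet, the mean-absolute scaling, the quantile-based threshold rule, the clipping range, and the 1024-bin uniform quantizer are all assumptions made for this sketch, not the paper's exact design choices.

```python
import numpy as np

def haar_dwt(x):
    # One level of the Haar transform: pairwise averages (approximation)
    # and pairwise differences (detail), each half the input length.
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def wavetoken_encode(series, levels=3, vocab_size=1024, keep_frac=0.5):
    """Map a real-valued series to discrete wavelet tokens (illustrative).

    Assumes len(series) is divisible by 2**levels.
    """
    # 1) Scale the input (mean absolute scaling is an assumption here).
    scale = np.mean(np.abs(series)) + 1e-8
    x = np.asarray(series, dtype=float) / scale

    # 2) Multi-level wavelet decomposition into coarse + detail coefficients.
    details = []
    for _ in range(levels):
        x, d = haar_dwt(x)
        details.append(d)
    # Coarsest approximation first, then details from coarse to fine.
    flat = np.concatenate([x] + details[::-1])

    # 3) Threshold: zero out the smallest coefficients, keeping keep_frac.
    thresh = np.quantile(np.abs(flat), 1.0 - keep_frac)
    flat = np.where(np.abs(flat) >= thresh, flat, 0.0)

    # 4) Uniform quantization of a clipped range into vocab_size bins,
    #    yielding integer token ids in [0, vocab_size - 1].
    clipped = np.clip(flat, -4.0, 4.0)
    tokens = np.round((clipped + 4.0) / 8.0 * (vocab_size - 1)).astype(int)
    return tokens, scale
```

The resulting token sequence would then be fed to an autoregressive model that predicts the tokens of the horizon's coefficients; decoding inverts the quantization and the wavelet transform, and rescales by `scale`.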