This work studies the problem of time series analysis with generalist (or foundation) models, i.e., models trained across many data domains. Drawing inspiration from the widespread success of large language models, we consider a simple strategy: discretely tokenize time series data drawn from a myriad of datasets via self-supervision, then use the fixed tokenization to solve a variety of tasks across many data domains. Canonically, time series models are either trained on a single dataset or built in a task-specific manner (e.g., a forecasting-only model), with many using patches of time as model inputs. As such, performant generalist, discrete-representation time series models evaluated across many tasks are of value. Our method, TOkenized Time Series EMbeddings (TOTEM), produces such generalist time series models with minimal or no fine-tuning while exhibiting strong zero-shot performance. We evaluate TOTEM extensively over nearly 500 experiments on three commonly studied time series tasks with real-world data: imputation (17 baselines, 12 datasets), anomaly detection (19 baselines, 25 datasets), and forecasting (14 baselines, 12 datasets). We conclude that TOTEM matches or outperforms existing state-of-the-art models in both the canonical specialist setting (i.e., training one model on one domain) and the generalist setting (i.e., training a single model on many domains), demonstrating the efficacy of tokenization for general time series analysis. The open-source implementation is available at https://github.com/SaberaTalukder/TOTEM; a video summary is available at https://www.youtube.com/watch?v=OqrCpdb6MJk.
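To make the core idea concrete, here is a minimal sketch of discrete time series tokenization by nearest-codeword lookup. This is an illustration only: TOTEM learns its codebook with self-supervision, whereas the codebook, window length, and all parameter names below are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical codebook: K codewords, each a length-`dim` vector.
# (TOTEM learns such a codebook; here it is random for illustration.)
codebook_size, dim = 8, 4
codebook = rng.normal(size=(codebook_size, dim))

def tokenize(series: np.ndarray) -> np.ndarray:
    """Split a 1D series into non-overlapping windows of length `dim`
    and map each window to the index of its nearest codeword."""
    n = len(series) // dim * dim
    windows = series[:n].reshape(-1, dim)             # (T // dim, dim)
    # Squared Euclidean distance from every window to every codeword.
    d2 = ((windows[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)                          # token ids in [0, K)

tokens = tokenize(rng.normal(size=32))                # 8 windows -> 8 tokens
```

Once tokenized, a downstream model operates on these discrete ids rather than raw values, which is what enables a single fixed vocabulary to be shared across data domains and tasks.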