Large pre-trained models have been vital to recent advances in domains such as language and vision, making model training for individual downstream tasks more efficient and providing superior performance. However, tackling time-series analysis tasks usually involves designing and training a separate model from scratch, leveraging training data and domain expertise specific to the task. We tackle a significant challenge in pre-training a foundational time-series model on multi-domain time-series datasets: extracting semantically useful tokenized inputs to the model across heterogeneous time-series from different domains. We propose Large Pre-trained Time-series Models (LPTM), which introduce a novel method of \textit{adaptive segmentation} that automatically identifies an optimal dataset-specific segmentation strategy during pre-training. This enables LPTM to perform similarly to or better than domain-specific state-of-the-art models when fine-tuned on different downstream time-series analysis tasks, as well as in zero-shot settings. LPTM achieves superior forecasting and time-series classification results while requiring up to 40% less data and 50% less training time than state-of-the-art baselines.