Recent research on time series foundation models has primarily focused on forecasting, leaving it unclear how generalizable their learned representations are. In this study, we examine whether frozen pre-trained forecasting models can provide effective representations for classification. To this end, we compare different representation extraction strategies and introduce two model-agnostic embedding augmentations. Our experiments show that the best forecasting models achieve classification accuracy that matches or even surpasses that of state-of-the-art models pre-trained specifically for classification. Moreover, we observe a positive correlation between forecasting and classification performance. These findings challenge the assumption that task-specific pre-training is necessary, and suggest that learning to forecast may provide a powerful route toward constructing general-purpose time series foundation models.
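To make the evaluation setup concrete, below is a minimal linear-probe sketch of the kind of pipeline the abstract describes: representations are read out from a frozen encoder and a lightweight classifier is trained on top. This is an illustrative assumption, not the paper's implementation; `extract_embeddings` is a hypothetical stand-in (a fixed random projection) for pooling hidden states from a pre-trained forecasting model, and the toy sine-wave dataset replaces the real classification benchmarks.

```python
# Minimal linear-probe sketch (illustrative; assumptions noted inline).
# `extract_embeddings` is a hypothetical stand-in for pooling hidden states
# from a frozen pre-trained forecasting model; here it is faked with a fixed
# random projection so the script runs end to end.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def extract_embeddings(series: np.ndarray, dim: int = 64) -> np.ndarray:
    """Hypothetical frozen-encoder stand-in: nonlinear random features."""
    proj = rng.standard_normal((series.shape[-1], dim))
    return np.tanh(series @ proj)  # shape: (n_samples, dim)

# Toy dataset: two classes of noisy univariate series with different frequencies.
t = np.linspace(0, 4 * np.pi, 128)
X0 = np.sin(t) + 0.3 * rng.standard_normal((100, t.size))
X1 = np.sin(2 * t) + 0.3 * rng.standard_normal((100, t.size))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

Z = extract_embeddings(X)  # frozen representations; no gradient updates
Z_tr, Z_te, y_tr, y_te = train_test_split(Z, y, random_state=0, stratify=y)

# Linear probe: a simple classifier trained on top of the frozen embeddings.
clf = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)
print("probe accuracy:", accuracy_score(y_te, clf.predict(Z_te)))
```

Because the probe never touches the encoder's weights, swapping the stand-in for hidden states from any frozen forecasting backbone leaves the rest of the pipeline unchanged, which is what makes this style of comparison model-agnostic.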