Time series anomaly detection is essential for the reliable operation of complex systems, yet most existing methods require extensive task-specific training. We explore whether time series foundation models (TSFMs), pretrained on large heterogeneous corpora, can serve as universal backbones for anomaly detection. Through systematic experiments across multiple benchmarks, we compare zero-shot inference, full fine-tuning, and parameter-efficient fine-tuning (PEFT) strategies. Our results demonstrate that TSFMs outperform task-specific baselines, achieving notable gains in AUC-PR and VUS-PR, particularly under severe class imbalance. Moreover, PEFT methods such as LoRA, OFT, and HRA not only reduce computational cost but also match or surpass full fine-tuning in most cases, indicating that TSFMs can be adapted efficiently to anomaly detection even when pretrained for forecasting. These findings position TSFMs as promising general-purpose models for scalable and efficient time series anomaly detection.
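To make the PEFT recipe the abstract refers to concrete, the sketch below shows one common way to adapt a frozen pretrained forecaster with LoRA and to score anomalies by forecast error. This is a minimal illustration under stated assumptions: the `LoRALinear` wrapper, the toy two-layer backbone standing in for a pretrained TSFM, and the residual-based scoring rule are our own simplifications, not the paper's implementation; OFT and HRA would replace the additive low-rank update with orthogonal or Householder reparameterizations.

```python
# Minimal sketch (illustrative, not the paper's implementation): LoRA-style
# parameter-efficient adaptation of a pretrained forecasting backbone, with
# anomaly scoring by absolute forecast error.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W x + (alpha/r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # freeze pretrained weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)

def anomaly_scores(model: nn.Module, window: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Score each point by forecast error: larger residual => more anomalous."""
    with torch.no_grad():
        pred = model(window)
    return (pred - target).abs()

# Usage: a stand-in two-layer network plays the role of a pretrained TSFM.
backbone = nn.Sequential(nn.Linear(64, 128), nn.GELU(), nn.Linear(128, 64))
for p in backbone.parameters():
    p.requires_grad = False                              # freeze the full backbone first
backbone[0] = LoRALinear(backbone[0], r=8)               # only A and B receive gradients
trainable = sum(p.numel() for p in backbone.parameters() if p.requires_grad)
total = sum(p.numel() for p in backbone.parameters())
print(f"{trainable} / {total} parameters are trainable")
```

Freezing the backbone and training only the low-rank factors is what keeps the adaptation cheap: the trainable parameter count scales with the rank `r` rather than with the size of the pretrained weights.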