Time series forecasting plays a critical role in decision-making across many real-world applications. Unlike data in the vision and language domains, time series data is inherently tied to the evolution of underlying processes and can only accumulate as real-world time progresses, limiting the effectiveness of scale-driven pretraining alone. This time-bound constraint makes it challenging to endow large language models (LLMs) with forecasting capability, as existing approaches rely primarily on representation-level alignment or inference-time temporal modules rather than explicitly teaching the LLM forecasting behavior. We propose T-LLM, a temporal distillation framework that equips general-purpose LLMs with time series forecasting ability by transferring predictive behavior from a lightweight temporal teacher during training. The teacher combines trend modeling with frequency-domain analysis to provide structured temporal supervision, and it is removed entirely at inference, leaving the LLM as the sole forecasting model. Experiments on benchmark datasets and infectious disease forecasting tasks show that T-LLM consistently outperforms existing LLM-based forecasting methods under full-shot, few-shot, and zero-shot settings, while enabling a simple and efficient deployment pipeline.
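The abstract's core mechanism, a training-time teacher whose supervision is distilled into a student that forecasts alone at inference, can be illustrated with a toy sketch. The sketch below is not the paper's implementation: it assumes a linear student, a hypothetical teacher built from a least-squares trend plus the dominant FFT component, and a loss that mixes ground truth with the teacher's forecast; all names, hyperparameters, and model choices are illustrative.

```python
import numpy as np

def teacher_forecast(window, horizon):
    """Hypothetical lightweight temporal teacher: least-squares linear
    trend plus the single strongest FFT component, extrapolated ahead.
    Illustrative stand-in for the paper's trend + frequency teacher."""
    n = len(window)
    t = np.arange(n)
    slope, intercept = np.polyfit(t, window, 1)          # trend fit
    detrended = window - (slope * t + intercept)
    spec = np.fft.rfft(detrended)
    k = int(np.argmax(np.abs(spec[1:])) + 1)             # dominant nonzero bin
    future_t = np.arange(n, n + horizon)
    trend = slope * future_t + intercept
    amp = 2.0 * np.abs(spec[k]) / n
    phase = np.angle(spec[k])
    seasonal = amp * np.cos(2.0 * np.pi * k * future_t / n + phase)
    return trend + seasonal

def train_student(series, lookback, horizon, alpha=0.5, lr=0.02, epochs=500):
    """Train a linear student with a distillation term toward the teacher:
    loss = alpha * MSE(student, truth) + (1 - alpha) * MSE(student, teacher).
    The teacher is used only here; inference needs nothing but W."""
    X, Y, T = [], [], []
    for i in range(len(series) - lookback - horizon + 1):
        w = series[i:i + lookback]
        X.append(w)
        Y.append(series[i + lookback:i + lookback + horizon])
        T.append(teacher_forecast(w, horizon))
    X, Y, T = np.asarray(X), np.asarray(Y), np.asarray(T)
    W = np.zeros((lookback, horizon))
    for _ in range(epochs):
        P = X @ W
        grad = X.T @ (alpha * (P - Y) + (1 - alpha) * (P - T)) / len(X)
        W -= lr * grad
    return W

# Synthetic series: weak trend + daily-style seasonality + noise.
rng = np.random.default_rng(0)
t = np.arange(300)
series = 0.002 * t + np.sin(2 * np.pi * t / 24) + 0.1 * rng.standard_normal(300)

W = train_student(series, lookback=48, horizon=12)
pred = series[-48:] @ W   # student-only inference: teacher fully discarded
```

The key property mirrored from the abstract is that the distillation signal (`T`) exists only inside `train_student`; deployment reduces to a single forward pass of the student.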