Time series forecasting plays a critical role in finance, energy, meteorology, and IoT applications. Recent studies have leveraged the generalization capabilities of large language models (LLMs) for time series forecasting, achieving promising performance. However, existing studies focus on token-level modal alignment rather than bridging the intrinsic modality gap between linguistic knowledge structures and time series data patterns, which greatly limits semantic representation. To address this issue, we propose a novel Semantic-Enhanced LLM (SE-LLM) that embeds the inherent periodicity and anomalous characteristics of time series into the semantic space to enhance token embeddings. This process improves the interpretability of tokens for LLMs, thereby activating their potential for temporal sequence analysis. Moreover, existing Transformer-based LLMs excel at capturing long-range dependencies but are weak at modeling short-term anomalies in time-series data. Hence, we propose a plugin module embedded within self-attention that models both long-term and short-term dependencies, effectively adapting LLMs to time-series analysis. Our approach freezes the LLM and reduces the sequence dimensionality of tokens, greatly reducing computational cost. Experiments demonstrate the superior performance of our SE-LLM against state-of-the-art (SOTA) methods.
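To make the plugin idea concrete, the following is a minimal PyTorch sketch of one way such a module could pair a global self-attention branch (long-range dependencies) with a depthwise 1-D convolution branch (short-term, local patterns), fused by a learned gate. The class name `DualScalePlugin`, the gating scheme, and all hyperparameters are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class DualScalePlugin(nn.Module):
    """Hypothetical sketch: a plugin alongside a frozen LLM block that
    combines global attention (long-range dependencies) with a depthwise
    convolution (short-term / anomalous local patterns)."""

    def __init__(self, d_model: int, n_heads: int = 8, local_kernel: int = 3):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Depthwise convolution over the sequence axis captures local patterns.
        self.local = nn.Conv1d(d_model, d_model, local_kernel,
                               padding=local_kernel // 2, groups=d_model)
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        long_term, _ = self.attn(x, x, x)                            # long-range branch
        short_term = self.local(x.transpose(1, 2)).transpose(1, 2)   # short-term branch
        g = torch.sigmoid(self.gate(torch.cat([long_term, short_term], dim=-1)))
        # Residual fusion: the gate decides per position how much each branch contributes.
        return x + g * long_term + (1 - g) * short_term
```

Since the plugin is the only trainable component in this sketch, the surrounding LLM weights can stay frozen, which is consistent with the reduced computational cost claimed above.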