Previous work has demonstrated the success of large language models (LLMs) on time series tasks. A symbolic time series representation provides an efficient bridge between LLMs and time series. The remaining challenge, however, is to exploit the semantic information hidden in time series using symbols or the existing tokens of LLMs, while aligning the LLM embedding space to that hidden information. The symbolic time series approximation (STSA) method known as adaptive Brownian bridge-based symbolic aggregation (ABBA) is particularly effective at preserving salient time series features: it models time series patterns in terms of amplitude and period while reusing the existing tokens of LLMs. In this paper, we introduce LLM-ABBA, a method that integrates ABBA into large language models for various downstream time series tasks. By symbolizing time series, LLM-ABBA compares favorably with the recent state of the art (SOTA) on the UCR archive and on three medical time series classification tasks. We also introduce a fixed-polygonal-chain trick in ABBA that avoids obvious drift during forecasting by significantly mitigating the cumulative error that misused symbols introduce during the transition from symbols back to numerical values. On time series regression tasks, LLM-ABBA sets a new SOTA on the Time Series Extrinsic Regression (TSER) benchmarks, and it also shows competitive forecasting capability compared with recent SOTA time series forecasting results. We believe this framework can extend seamlessly to other time series tasks. Our code is publicly available at: https://github.com/inEXASCALE/llm-abba
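The symbolization pipeline the abstract describes (compress a series into piecewise-linear pieces characterized by length and increment, assign each piece a symbol, then invert symbols back to numbers) can be illustrated with a minimal, self-contained sketch. The tolerances, the greedy nearest-centre digitization, and all function names below are illustrative assumptions, not the paper's exact algorithm: ABBA proper uses a Brownian-bridge-based stopping criterion and a proper clustering step.

```python
import numpy as np

def compress(ts, tol=0.5):
    """Greedy piecewise-linear compression (illustrative, not ABBA's exact
    criterion): extend each segment while the straight line between its
    endpoints stays within `tol` RMS error; emit (length, increment) pieces."""
    pieces, start, n = [], 0, len(ts)
    while start < n - 1:
        end = start + 1
        while end + 1 < n:
            seg = ts[start:end + 2]
            L = len(seg) - 1
            line = seg[0] + (seg[-1] - seg[0]) * np.arange(L + 1) / L
            if np.sqrt(np.mean((seg - line) ** 2)) > tol:
                break
            end += 1
        pieces.append((end - start, ts[end] - ts[start]))
        start = end
    return pieces

def digitize(pieces, dist_tol=1.0):
    """Map each (length, increment) piece to the nearest existing cluster
    centre, creating a new symbol when none is close enough (a stand-in for
    ABBA's clustering step)."""
    centres, symbols = [], []
    for p in pieces:
        p = np.asarray(p, dtype=float)
        if centres:
            d = [np.linalg.norm(p - c) for c in centres]
            j = int(np.argmin(d))
            if d[j] <= dist_tol:
                symbols.append(chr(ord('a') + j))
                continue
        centres.append(p)
        symbols.append(chr(ord('a') + len(centres) - 1))
    return ''.join(symbols), centres

def reconstruct(symbols, centres, start_value):
    """Invert symbols to numbers by chaining pieces end to end. Because each
    piece starts where the previous one ended, a misused symbol shifts every
    later value -- the cumulative drift the fixed-polygonal-chain trick in the
    paper is designed to mitigate."""
    out = [start_value]
    for s in symbols:
        length, inc = centres[ord(s) - ord('a')]
        L = max(1, int(round(length)))
        out.extend(out[-1] + inc * np.arange(1, L + 1) / L)
    return np.array(out)
```

For example, a triangular series rising from 0 to 5 and back compresses to two pieces, digitizes to the string `"ab"`, and reconstructs exactly; with noisier data and coarser tolerances, the reconstruction error from a wrongly assigned symbol propagates through all subsequent values, which is why forecasting needs the drift mitigation described above.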