Large language models (LLMs) have been applied in many fields and have developed rapidly in recent years. As a classic machine learning task, time series forecasting has recently been boosted by LLMs. Recent works treat large language models as \emph{zero-shot} time series reasoners without further fine-tuning, achieving remarkable performance. However, several research problems remain unexplored when applying LLMs to time series forecasting under the zero-shot setting. For instance, LLMs' preferences regarding the input time series are poorly understood. In this paper, by comparing LLMs with traditional time series forecasting models, we observe many interesting properties of LLMs in the context of time series forecasting. First, our study shows that LLMs perform well when predicting time series with clear patterns and trends, but face challenges on datasets lacking periodicity. This observation can be explained by LLMs' ability to recognize the underlying period within a dataset, which our experiments support. In addition, we investigate the input strategy and find that incorporating external knowledge and adopting natural language paraphrases substantially improve the predictive performance of LLMs on time series. Overall, our study offers insight into LLMs' advantages and limitations in time series forecasting under different conditions.
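The observation that forecasting quality tracks periodicity can be probed with a classical baseline check. Below is a minimal sketch (not the paper's method; the function name `dominant_period` and all parameters are illustrative assumptions) that estimates a series' dominant period from its FFT periodogram, the kind of structure the abstract suggests LLMs implicitly pick up on:

```python
import numpy as np

def dominant_period(series):
    """Estimate the dominant period of a 1-D series via the FFT periodogram.

    Hypothetical helper for illustration only; not from the paper.
    """
    x = np.asarray(series, dtype=float)
    x = x - x.mean()                       # remove the DC component
    power = np.abs(np.fft.rfft(x)) ** 2    # periodogram: power per frequency bin
    freqs = np.fft.rfftfreq(x.size)
    k = int(np.argmax(power[1:])) + 1      # strongest bin, skipping zero frequency
    return round(1.0 / freqs[k])           # convert frequency back to a period

# A noisy sine with period 24 (e.g., hourly data with a daily cycle).
t = np.arange(480)
rng = np.random.default_rng(0)
y = np.sin(2 * np.pi * t / 24) + 0.1 * rng.normal(size=t.size)
print(dominant_period(y))  # 24
```

A series where this peak is sharp (strong seasonality) is exactly the regime where the abstract reports LLMs doing well; a flat periodogram corresponds to the aperiodic datasets where they struggle.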