This research examines the use of Large Language Models (LLMs) for time series forecasting, with a specific focus on the LLMTIME model. Although LLMs are well established in tasks such as text generation, language translation, and sentiment analysis, this study highlights the key challenges they face in time series prediction. We assess the performance of LLMTIME across multiple datasets and introduce classical almost periodic functions as test time series to gauge its effectiveness. The empirical results indicate that while LLMs can perform well in zero-shot forecasting on certain datasets, their predictive accuracy degrades notably when confronted with diverse time series data and classical signals. The primary finding is that the predictive capacity of LLMTIME, like that of other LLMs, deteriorates significantly when the time series contains both periodic and trend components, or when the signal comprises complex frequency components.
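As a point of reference, a classical almost periodic function can be built by summing sinusoids with incommensurate frequencies; the sketch below is an illustrative example of such a signal (the specific frequencies and the paper's exact test signals are assumptions, not taken from the study):

```python
import numpy as np

def almost_periodic(t):
    """Classical almost periodic signal: sum of two sinusoids whose
    frequency ratio (1 : sqrt(2)) is irrational, so the signal is
    bounded and recurrent but never exactly repeats."""
    return np.sin(t) + np.sin(np.sqrt(2) * t)

# Sample the signal on a grid, as one would when forming a time series
# to feed a forecasting model such as LLMTIME.
t = np.arange(0.0, 100.0, 0.1)
x = almost_periodic(t)

# No finite period T gives x(t + T) == x(t) exactly; only "almost
# periods" bring the signal within a small epsilon of itself.
```

Signals of this kind combine multiple frequency components without a single exact period, which is precisely the regime where the abstract reports degraded predictive accuracy.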