Large Language Models (LLMs) have recently demonstrated significant potential in time series forecasting, offering impressive capabilities in handling complex temporal data. However, their robustness and reliability in real-world applications remain underexplored, particularly concerning their susceptibility to adversarial attacks. In this paper, we introduce a targeted adversarial attack framework for LLM-based time series forecasting. By employing both gradient-free and black-box optimization methods, we generate minimal yet highly effective perturbations that significantly degrade forecasting accuracy across multiple datasets and LLM architectures. Our experiments, covering TimeGPT and LLM-Time with GPT-3.5, GPT-4, LLaMA, and Mistral, show that adversarial attacks cause far more severe performance degradation than random noise and that our attacks are broadly effective across different LLMs. These results underscore critical vulnerabilities of LLMs in time series forecasting and highlight the need for robust defense mechanisms to ensure their reliable deployment in practical applications.
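The abstract describes the attack only at a high level, so the sketch below is purely illustrative: a gradient-free, black-box perturbation search of the general kind described, implemented here as simple random search against an opaque forecaster. All names (`blackbox_attack`, `forecast_fn`, `eps`) are hypothetical, not the paper's API, and the paper's actual optimization procedure may differ.

```python
import numpy as np

def blackbox_attack(forecast_fn, history, eps=0.05, n_iters=200, seed=0):
    """Hypothetical gradient-free attack sketch (random search).

    forecast_fn : black-box forecaster; maps a 1-D history array to a forecast.
    history     : clean input series (1-D numpy array).
    eps         : perturbation budget, relative to the series' mean magnitude.
    Returns the perturbed history found to most degrade the forecast.
    """
    rng = np.random.default_rng(seed)
    scale = np.abs(history).mean() + 1e-8        # tie the budget to series scale
    clean_forecast = forecast_fn(history)        # reference output on clean input

    best_delta = np.zeros_like(history)
    best_err = 0.0
    for _ in range(n_iters):
        # Propose a small random mutation of the current best perturbation.
        cand = best_delta + rng.normal(0.0, 0.01 * scale, size=history.shape)
        cand = np.clip(cand, -eps * scale, eps * scale)   # enforce the budget
        err = np.mean((forecast_fn(history + cand) - clean_forecast) ** 2)
        if err > best_err:   # keep the mutation only if it degrades the forecast more
            best_err, best_delta = err, cand
    return history + best_delta

if __name__ == "__main__":
    # Toy demo: attack a naive last-value forecaster on a sine wave.
    series = np.sin(np.linspace(0, 8 * np.pi, 96))
    naive = lambda h: np.full(24, h[-1])          # stand-in for an LLM forecaster
    adv = blackbox_attack(naive, series)
    print("max perturbation:", np.max(np.abs(adv - series)))
```

Because the search needs only forward queries of `forecast_fn`, a procedure of this kind applies equally to API-only models such as TimeGPT or prompted GPT-4, which is what makes black-box attacks practical in this setting.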