Large language models (LLMs) represent a significant breakthrough in artificial intelligence and hold considerable potential for applications in smart grids. However, as prior literature has demonstrated, AI technologies are susceptible to various types of attacks. It is therefore crucial to investigate and evaluate the risks associated with LLMs before deploying them in critical infrastructure such as smart grids. In this paper, we systematically evaluate the risks of LLMs, identify two major types of attacks relevant to potential smart grid LLM applications, and present the corresponding threat models. We also validate these attacks using popular LLMs and real smart grid data. Our validation demonstrates that attackers can inject bad data into, and retrieve domain knowledge from, LLMs employed in different smart grid applications.