The rising use of Large Language Models (LLMs) to create and disseminate malware poses a significant cybersecurity challenge: a single prompt can initiate a wide array of malicious activities. This paper addresses this critical issue through a multifaceted approach. First, we provide a comprehensive overview of LLMs and their role in detecting malware from diverse sources, examining five specific applications: malware honeypots, identification of text-based threats, code analysis for detecting malicious intent, malware trend analysis, and detection of non-standard, disguised malware. Our review includes a detailed analysis of the existing literature, establishes guiding principles for the secure use of LLMs, and introduces a classification scheme for categorizing the relevant work. Second, we propose performance metrics to assess the effectiveness of LLMs in these contexts. Third, we present a risk mitigation framework that leverages LLMs to prevent malware. Finally, we evaluate the proposed risk mitigation strategies against various factors and demonstrate their effectiveness in countering LLM-enabled malware. The paper concludes by suggesting future advancements and areas requiring deeper exploration in this field of artificial intelligence.