As large language models (LLMs) grow increasingly sophisticated, their potential for abuse rises drastically. As a submission to the Swiss AI Safety Prize, we present a novel type of metamorphic malware that leverages LLMs for two key processes. First, LLMs are used for automatic code rewriting to evade signature-based detection by anti-malware programs. The malware then spreads copies of itself via email, using an LLM to socially engineer replies that encourage recipients to execute the attached malware. Our submission includes a functional minimal prototype, highlighting the risks that LLMs pose to cybersecurity and underscoring the need for further research into intelligent malware.