In the rapidly evolving field of machine learning, adversarial attacks pose a significant challenge to model robustness and security. Decision-based attacks, which require only the model's final decision rather than detailed probabilities or scores, are particularly insidious and difficult to defend against. This work introduces L-AutoDA (Large Language Model-based Automated Decision-based Adversarial Attacks), a novel approach that leverages the generative capabilities of Large Language Models (LLMs) to automate the design of such attacks. By iteratively interacting with LLMs within an evolutionary framework, L-AutoDA automatically and efficiently designs competitive attack algorithms with little human effort. We demonstrate the efficacy of L-AutoDA on the CIFAR-10 dataset, showing significant improvements over baseline methods in both success rate and computational efficiency. Our findings underscore the potential of language models as tools for adversarial attack generation and highlight new avenues for the development of robust AI systems.
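The evolutionary interaction described above can be sketched in miniature. The code below is an illustrative toy, not the paper's actual method: the LLM call is mocked by `llm_mutate`, and the fitness function is a stand-in for "smaller adversarial perturbation at equal attack success". All function names and parameters here are hypothetical.

```python
import random

def llm_mutate(candidate):
    """Placeholder for prompting an LLM to rewrite a candidate
    perturbation-update rule; here we simply jitter its step size."""
    return {"step": candidate["step"] * random.uniform(0.5, 1.5)}

def fitness(candidate):
    """Stand-in objective: prefer step sizes near an assumed optimum
    of 0.1, mimicking 'smaller perturbation at equal success rate'."""
    return -abs(candidate["step"] - 0.1)

def evolve(pop_size=8, generations=20, seed=0):
    """Minimal (mu + lambda)-style evolutionary loop: keep the best
    half of the population, ask the (mocked) LLM for mutated children."""
    random.seed(seed)
    population = [{"step": random.uniform(0.0, 2.0)} for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        children = [llm_mutate(random.choice(survivors)) for _ in survivors]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(best)
```

In the actual framework, the mutation step would prompt an LLM with the source code of parent attack algorithms and evaluate children by running them against the target model, but the selection-and-mutation skeleton is the same.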