The exponential growth of Machine Learning and its Generative AI applications brings with it significant security challenges, studied under the umbrella of Adversarial Machine Learning (AML). In this paper, we conducted two comprehensive studies to explore the perspectives of industry professionals and students on AML vulnerabilities and on strategies for teaching them. In our first study, we conducted an online survey of industry professionals, revealing a notable correlation between cybersecurity education and concern for AML threats. For our second study, we developed two Capture the Flag (CTF) challenges that implement Natural Language Processing and Generative AI concepts and demonstrate a poisoning attack on the training data set. We evaluated the effectiveness of these challenges by surveying undergraduate and graduate students at Carnegie Mellon University, finding that a CTF-based approach effectively engages student interest in AML threats. Based on the responses of participants in both studies, we provide detailed recommendations emphasizing the critical need to integrate security education into the ML curriculum.
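To make the attack concrete, the kind of training-data poisoning the abstract refers to can be illustrated with a toy example. The sketch below is not the paper's CTF implementation: it uses a hypothetical 1-D nearest-centroid classifier and hand-crafted synthetic data purely to show how flipping a few training labels shifts the learned decision boundary and causes misclassification.

```python
# Toy illustration of a label-flipping (data poisoning) attack.
# All data and the classifier are synthetic assumptions for this sketch,
# not the challenge code described in the paper.

def train_centroids(data):
    """Compute the mean feature value (centroid) for each label."""
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(centroids, x):
    """Assign x to the label whose centroid is nearest."""
    return min(centroids, key=lambda y: abs(centroids[y] - x))

# Clean training set: class 0 clusters near 0.0, class 1 near 1.0.
clean = [(0.0, 0), (0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1), (1.0, 1)]

# Poisoned copy: the attacker flips the labels of the two class-1
# points nearest the boundary, dragging the class-0 centroid upward.
poisoned = [(0.0, 0), (0.1, 0), (0.2, 0), (0.8, 0), (0.9, 0), (1.0, 1)]

x = 0.6  # a test point whose true label is 1
print(predict(train_centroids(clean), x))     # → 1 (correct)
print(predict(train_centroids(poisoned), x))  # → 0 (flipped by poisoning)
```

Here only two of six training labels are tampered with, yet the class-0 centroid moves from 0.1 to 0.4, pushing the decision boundary past the test point; this is the intuition the poisoning CTF challenge builds on.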