Code Language Models (CLMs) have achieved tremendous progress in source code understanding and generation, spurring growing research interest in applying CLMs to real-world software engineering tasks in recent years. However, in realistic scenarios, CLMs are exposed to malicious adversaries, posing risks to the confidentiality, integrity, and availability of CLM systems. Despite these risks, a comprehensive analysis of the security vulnerabilities of CLMs in adversarial environments has been lacking. To close this research gap, we categorize existing attack techniques into three types based on the CIA triad: poisoning attacks (integrity \& availability infringement), evasion attacks (integrity infringement), and privacy attacks (confidentiality infringement). We have collected the most comprehensive set of papers to date (79) on adversarial machine learning for CLMs, drawn from the research fields of artificial intelligence, computer security, and software engineering. Our analysis covers each type of risk, examining threat model categorization, attack techniques, and countermeasures, while also introducing novel perspectives from eXplainable AI (XAI) and exploring the interconnections between different risks. Finally, we identify current challenges and future research opportunities. This study aims to provide a comprehensive roadmap for both researchers and practitioners and to pave the way towards more reliable CLMs for practical applications.