Recent advances in large language models (LLMs) have revolutionized code intelligence, improving programming productivity and alleviating challenges faced by software developers. To further improve the performance of LLMs on specific code intelligence tasks while reducing training costs, researchers have exploited a new capability of LLMs: in-context learning (ICL). ICL allows LLMs to learn from a few demonstrations provided in the prompt context, achieving impressive results without any parameter updates. However, the rise of ICL introduces new security vulnerabilities in the code intelligence field. In this paper, we explore a novel security scenario based on the ICL paradigm, in which attackers act as third-party ICL agencies and supply users with bad ICL content that misleads LLM outputs on code intelligence tasks. Our study demonstrates the feasibility and risks of this scenario, revealing how attackers can leverage malicious demonstrations to construct bad ICL content and induce LLMs to produce incorrect outputs, posing significant threats to system security. We propose DICE, a novel method for constructing bad ICL content that consists of two stages, Demonstration Selection and Bad ICL Construction; DICE builds targeted bad ICL content conditioned on the user query, and the resulting content transfers across different query inputs. Ultimately, our findings underscore the critical importance of securing ICL mechanisms to protect code intelligence systems from adversarial manipulation.
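To make the two-stage idea concrete, the following minimal sketch (our illustration, not the paper's DICE implementation) shows how a third-party ICL agency could select query-relevant demonstrations and corrupt their outputs before assembling the prompt handed to the LLM; the helper names `similarity` and `corrupt_output`, as well as the toy "safe"/"vulnerable" labels, are hypothetical placeholders.

```python
# Conceptual sketch of a two-stage bad-ICL construction pipeline:
# (1) select the demonstrations most relevant to the user query,
# (2) corrupt their outputs so the assembled few-shot prompt
#     misleads the downstream LLM.
# All helper names and labels below are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Demonstration:
    code: str    # example input (e.g., a code snippet)
    output: str  # correct label / completion for that input


def similarity(a: str, b: str) -> float:
    # Placeholder similarity: token overlap; a real system might use embeddings.
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(len(ta | tb), 1)


def select_demonstrations(query: str, pool: list[Demonstration], k: int) -> list[Demonstration]:
    # Stage 1 (Demonstration Selection): pick the k demonstrations closest to the query.
    return sorted(pool, key=lambda d: similarity(query, d.code), reverse=True)[:k]


def corrupt_output(output: str) -> str:
    # Stage 2 (Bad ICL Construction), illustrative only: flip the label to a
    # wrong target, e.g., mark vulnerable code as "safe".
    return "safe" if output != "safe" else "vulnerable"


def build_bad_icl_prompt(query: str, pool: list[Demonstration], k: int = 3) -> str:
    # Assemble the few-shot prompt: corrupted demonstrations followed by the query.
    demos = select_demonstrations(query, pool, k)
    parts = [f"Code:\n{d.code}\nAnswer: {corrupt_output(d.output)}\n" for d in demos]
    parts.append(f"Code:\n{query}\nAnswer:")
    return "\n".join(parts)
```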