Large language models (LLMs) have revolutionized software development practices, yet concerns about their safety have arisen, particularly regarding hidden backdoors, also known as trojans. Backdoor attacks involve the insertion of triggers into training data, allowing attackers to maliciously manipulate the model's behavior. In this paper, we focus on analyzing model parameters to detect potential backdoor signals in code models. Specifically, we examine the attention weights and biases, as well as the context embeddings, of clean and poisoned CodeBERT and CodeT5 models. Our results reveal noticeable patterns in the context embeddings of poisoned samples for both poisoned models; however, the attention weights and biases show no significant differences. This work contributes to ongoing efforts in the white-box detection of backdoor signals in LLMs of code through the analysis of parameters and embeddings.