Mixture of Experts (MoE) architectures have gained popularity for reducing the computational cost of deep neural networks by activating only a subset of parameters during inference. While this efficiency makes MoE attractive for vision tasks, the patch-based processing in vision models opens new avenues for adversaries to mount backdoor attacks. In this work, we investigate the vulnerability of vision MoE models for image classification, specifically patch-based MoE (pMoE) models and MoE-based vision transformers, to backdoor attacks. We propose BadPatches, a novel routing-aware trigger application method designed for the patch-based processing in vision MoE models. BadPatches applies triggers to individual image patches rather than to the entire image. We show that BadPatches achieves high attack success rates (ASRs) at lower poisoning rates than routing-agnostic triggers, succeeding at poisoning rates as low as 0.01% with an ASR above 80% on pMoE. Moreover, BadPatches remains effective even when the adversary lacks complete knowledge of the patch routing configuration of the target models. Next, we explore how trigger design affects pMoE patch routing. Finally, we investigate fine-pruning as a defense. Our results show that only the fine-tuning stage of fine-pruning removes the backdoor from the model.
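The abstract does not specify BadPatches' implementation, but the core idea of stamping a trigger onto a single patch of the non-overlapping patch grid (so that the poisoned patch, rather than the whole image, carries the backdoor signal seen by the router) can be sketched as follows. All function names, sizes, and the choice of target patch are hypothetical illustrations, not the paper's actual method:

```python
import numpy as np

def apply_patch_trigger(image, trigger, patch_size, patch_index):
    """Stamp `trigger` onto one patch of `image`, which is viewed as a
    non-overlapping grid of patch_size x patch_size patches.

    A routing-aware attack would choose `patch_index` based on knowledge
    of the model's patch routing; here the choice is left to the caller.
    """
    h, w = image.shape[:2]
    cols = w // patch_size                      # patches per row
    row, col = divmod(patch_index, cols)        # grid coordinates
    y, x = row * patch_size, col * patch_size   # pixel coordinates
    poisoned = image.copy()                     # leave the original intact
    poisoned[y:y + patch_size, x:x + patch_size] = trigger
    return poisoned

# Toy example: 32x32 grayscale image, 8x8 patches,
# white-square trigger placed on patch index 5 (grid row 1, col 1).
img = np.zeros((32, 32), dtype=np.uint8)
trig = np.full((8, 8), 255, dtype=np.uint8)
poisoned = apply_patch_trigger(img, trig, patch_size=8, patch_index=5)
```

A routing-agnostic trigger, by contrast, would typically be stamped at a fixed pixel location (e.g. a corner) without aligning to the model's patch grid, which is what the abstract contrasts BadPatches against.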