Vision-Language-Action (VLA) models map multimodal perception and language instructions to executable robot actions, making them particularly vulnerable to behavioral backdoor manipulation: a hidden trigger introduced during training can induce unintended physical actions while nominal task performance remains intact. Prior work on VLA backdoors primarily studies untargeted attacks or task-level hijacking, leaving fine-grained control over individual actions largely unexplored. In this work, we present DropVLA, an action-level backdoor attack that forces a reusable action primitive (e.g., open_gripper) to execute at attacker-chosen decision points under a realistic pipeline-black-box setting with limited data-poisoning access, using a window-consistent relabeling scheme for chunked fine-tuning. On OpenVLA-7B evaluated with LIBERO, vision-only poisoning achieves a 98.67%-99.83% attack success rate (ASR) with only 0.31% poisoned episodes while preserving 98.50%-99.17% clean-task retention, and it triggers the targeted action within 25 control steps at 500 Hz (0.05 s). Text-only triggers are unstable at low poisoning budgets, and combining text with vision provides no consistent ASR improvement over vision-only attacks. The backdoor remains robust to moderate trigger variations and transfers across evaluation suites (96.27% and 99.09% ASR), whereas the text-only trigger largely fails to transfer (0.72%). We further validate physical-world feasibility on a 7-DoF Franka arm with pi0-fast, demonstrating non-trivial attack efficacy under camera-relative motion that induces image-plane trigger drift. These results show that VLA models can be covertly steered at the granularity of safety-critical actions with minimal poisoning and without observable degradation of nominal performance.
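To make the window-consistent relabeling idea concrete, the sketch below shows one plausible way a data-poisoning attacker could stamp a visual trigger into an episode and relabel every action inside the affected prediction window with the target primitive, so that all chunked training targets overlapping the trigger agree. This is an illustrative assumption, not the paper's implementation; the chunk size, trigger patch, and target-action encoding are hypothetical placeholders.

```python
import numpy as np

# Hypothetical constants (illustrative, not from the paper):
CHUNK = 8  # actions predicted per forward pass in chunked fine-tuning
# Assume a 7-D action whose last dimension is the gripper; 1.0 = open_gripper.
TARGET_ACTION = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0])

def stamp_trigger(frame, size=16):
    """Paste a small white square (the visual trigger) into the image corner."""
    out = frame.copy()
    out[:size, :size] = 255
    return out

def poison_episode(frames, actions, t_trig):
    """Inject the trigger starting at step t_trig and relabel every action in
    the window [t_trig, t_trig + CHUNK), so that any training chunk that sees
    the trigger is labeled consistently with the target primitive."""
    frames = [f.copy() for f in frames]
    actions = [a.copy() for a in actions]
    lo, hi = t_trig, min(t_trig + CHUNK, len(actions))
    for t in range(lo, hi):
        frames[t] = stamp_trigger(frames[t])
        actions[t] = TARGET_ACTION.copy()  # window-consistent relabeling
    return frames, actions
```

With a poisoning budget as small as the reported 0.31% of episodes, only a handful of trajectories would pass through `poison_episode` before fine-tuning; all other episodes stay untouched, which is what preserves clean-task performance.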