Recently, driven by advances in Multimodal Large Language Models (MLLMs), Vision Language Action Models (VLAMs) have been proposed to achieve better performance in open-vocabulary scenarios for robotic manipulation tasks. Since manipulation tasks involve direct interaction with the physical world, ensuring robustness and safety during task execution is a critical concern. In this paper, by synthesizing current safety research on MLLMs with the specific application scenarios of manipulation tasks in the physical world, we comprehensively evaluate VLAMs against potential physical threats. Specifically, we propose the Physical Vulnerability Evaluating Pipeline (PVEP), which incorporates as many visual-modality physical threats as possible for evaluating the physical robustness of VLAMs. The physical threats in PVEP include Out-of-Distribution data, Typography-based Visual Prompts, and Adversarial Patch Attacks. By comparing the performance fluctuations of VLAMs before and after attacks, we provide generalizable analyses of how VLAMs respond to different physical security threats. Our project page is available at: https://chaducheng.github.io/Manipulat-Facing-Threats/.