Building on the unprecedented command-understanding capabilities of large language models and the zero-shot recognition capabilities of multi-modal vision-language transformers, vision-language navigation (VLN) has emerged as an effective way to address several fundamental challenges toward a natural-language interface for robot navigation. However, such vision-language models are inherently vulnerable because their underlying embedding space lacks semantic grounding. Using a recently developed gradient-based optimization procedure, we demonstrate that images can be modified imperceptibly so that, to a vision-language model, they match the representations of entirely different images and unrelated texts. Building on this, we develop algorithms that adversarially modify a minimal number of images so that the robot follows a route of the attacker's choice for commands that reference multiple landmarks. We demonstrate this experimentally on a recently proposed VLN system: for a given navigation command, the robot can be made to follow drastically different routes. We also develop an efficient algorithm that reliably detects such malicious modifications, exploiting the fact that adversarially modified images are far more sensitive to added Gaussian noise than the original images.
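To make the embedding-matching attack concrete, the following is a minimal sketch, not the paper's exact procedure: a PGD-style loop, assuming a differentiable CLIP-like image encoder, that perturbs an image within an L-infinity ball so its embedding drifts toward that of an unrelated target. The names `encoder` and `target_emb`, the cosine-similarity objective, and the budget parameters `eps`, `alpha`, and `steps` are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def embed_match_attack(encoder, x, target_emb, eps=8/255, alpha=1/255, steps=200):
    """PGD-style sketch: perturb image batch x within an L-infinity ball of
    radius eps so that encoder(x + delta) moves toward target_emb.
    encoder, target_emb, and all hyperparameters are illustrative assumptions."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        emb = encoder(x + delta)
        # Minimize negative cosine similarity, i.e. pull the perturbed
        # embedding toward the target image/text embedding.
        loss = -F.cosine_similarity(emb, target_emb, dim=-1).mean()
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()        # signed gradient step
            delta.clamp_(-eps, eps)                   # keep perturbation imperceptible
            delta.copy_((x + delta).clamp(0, 1) - x)  # keep pixels in valid range
        delta.grad.zero_()
    return (x + delta).detach()
```

Because the loss is defined purely in embedding space, the same loop can target a text embedding instead of an image embedding, which matches the claim that a modified image can be made to match unrelated texts as well.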
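The noise-sensitivity detector can likewise be illustrated with a short sketch under stated assumptions: the statistic below averages the embedding shift of an image under small added Gaussian noise, and an image is flagged when the shift exceeds a threshold calibrated on known-clean images. The noise scale `sigma`, the sample count, the cosine-distance statistic, and the threshold `tau` are assumptions for illustration; the paper's actual detector may differ.

```python
import torch
import torch.nn.functional as F

def noise_sensitivity(encoder, x, sigma=0.02, n_samples=16):
    """Average embedding shift of image batch x under added Gaussian noise.
    Adversarially modified images tend to show a much larger shift than
    clean ones; sigma and n_samples are illustrative assumptions."""
    with torch.no_grad():
        base = encoder(x)
        shifts = []
        for _ in range(n_samples):
            noisy = (x + sigma * torch.randn_like(x)).clamp(0, 1)
            # Cosine distance between the clean and noise-perturbed embeddings.
            shifts.append(1 - F.cosine_similarity(encoder(noisy), base, dim=-1))
        return torch.stack(shifts).mean(dim=0)

def is_adversarial(encoder, x, tau):
    # tau is a hypothetical threshold calibrated on known-clean images,
    # e.g. a high percentile of their noise_sensitivity scores.
    return noise_sensitivity(encoder, x) > tau
```

The design choice here is that the detector needs only black-box forward passes through the same encoder the attack targets, so it can run as a cheap pre-filter before images are fed to the VLN system.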