Vision-Language-Action (VLA) models are widely used in Embodied AI, enabling robots to interpret and execute language instructions. However, their robustness to natural language variability in real-world scenarios has not been thoroughly investigated. In this work, we present a systematic study of the robustness of state-of-the-art VLA models under linguistic perturbations. Specifically, we evaluate model performance under two types of instruction noise: (1) human-generated paraphrasing and (2) the addition of irrelevant context. We further categorize irrelevant contexts into two groups according to their length and their semantic and lexical proximity to robot commands. We observe consistent performance degradation as context length grows. We also show that models can be relatively robust to random context, with a performance drop within 10%, whereas semantically and lexically similar context of the same length can cause a drop of around 50%. Human paraphrases of instructions lead to a drop of nearly 20%. To mitigate this, we propose an LLM-based filtering framework that extracts the core command from noisy input. Incorporating our filtering step allows models to recover up to 98.5% of their original performance under noisy conditions.
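To make the filtering step concrete, here is a minimal sketch of what an LLM-based command extractor might look like, assuming an OpenAI-style chat API. The prompt wording, model choice, and function name are illustrative assumptions, not the paper's actual implementation; the key idea is simply to recover the core command before it reaches the VLA policy.

```python
# Minimal sketch of LLM-based instruction filtering (illustrative, not the
# paper's exact implementation). Assumes the openai Python SDK (v1.x) and an
# OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

FILTER_PROMPT = (
    "You will receive a robot instruction that may contain paraphrasing or "
    "irrelevant context. Return only the core command the robot should "
    "execute, with no extra words."
)

def extract_core_command(noisy_instruction: str) -> str:
    """Strip irrelevant context from a noisy instruction before it is
    passed to the VLA model."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed: any capable instruction-following LLM
        messages=[
            {"role": "system", "content": FILTER_PROMPT},
            {"role": "user", "content": noisy_instruction},
        ],
        temperature=0.0,  # deterministic extraction
    )
    return response.choices[0].message.content.strip()

# Example: the filtered command is what the VLA policy conditions on.
noisy = (
    "The kitchen timer just went off and the counter is a mess; anyway, "
    "pick up the red block and place it in the tray."
)
print(extract_core_command(noisy))
# expected output along the lines of:
# "Pick up the red block and place it in the tray."
```

Under this setup, the filter acts as a lightweight preprocessing stage in front of the VLA model, so the policy itself never needs to be retrained to handle noisy language.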