Vision-Language-Action (VLA) models leverage pretrained Vision-Language Models (VLMs) as backbones to map images and instructions to actions, demonstrating remarkable potential for generalizable robotic manipulation. To enhance performance, existing methods often incorporate extra observation cues (e.g., depth maps, point clouds) or auxiliary modules (e.g., object detectors, encoders) to enable more precise and reliable task execution, yet these typically require costly data collection and additional training. Inspired by the finding that Feed-Forward Networks (FFNs) in language models can act as "key-value memories", we propose Uncertainty-aware Observation Reinjection (UAOR), an effective, training-free, plug-and-play module for VLA models. Specifically, when the current language-model layer exhibits high uncertainty, measured by Action Entropy, UAOR reinjects key observation information into the next layer's FFN through attention retrieval. This mechanism helps VLAs attend to observations more closely during inference, enabling more confident and faithful action generation. Comprehensive experiments show that our method consistently improves diverse VLA models across simulation and real-world tasks with minimal overhead. Notably, UAOR requires no additional observation cues or modules, making it a versatile and practical plug-in for existing VLA pipelines. The project page is at https://uaor.jiabingyang.cn.
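The gating idea above, measuring the entropy of the action distribution at one layer and, when it is high, retrieving observation features by attention and adding them back before the next layer's FFN, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the single-query ReLU FFN, and the threshold `tau` are all assumptions introduced for exposition.

```python
import numpy as np

def action_entropy(logits):
    # Shannon entropy of the softmax distribution over action tokens;
    # high entropy signals an uncertain layer (illustrative definition).
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()
    return float(-(p * np.log(p + 1e-12)).sum())

def attention_retrieve(query, obs_keys, obs_values):
    # Single-query scaled dot-product attention over observation tokens.
    d = query.shape[-1]
    scores = obs_keys @ query / np.sqrt(d)   # (n_obs,)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ obs_values                    # (d,)

def ffn_with_reinjection(x, W1, W2, obs_keys, obs_values, prev_entropy, tau):
    # FFN forward pass for one token; when the previous layer's action
    # entropy exceeded tau, reinject retrieved observation features into
    # the FFN input (hypothetical gating, sketched for illustration).
    if prev_entropy > tau:
        x = x + attention_retrieve(x, obs_keys, obs_values)
    h = np.maximum(W1 @ x, 0.0)              # ReLU hidden activation
    return W2 @ h
```

Because the reinjection is a conditional additive term at inference time, a gate of this shape leaves the model's weights untouched, which is consistent with the training-free, plug-and-play claim.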