Vision-language-action (VLA) models typically inject proprioception only as a late conditioning signal, which prevents robot state from shaping instruction understanding and from influencing which visual tokens are attended throughout the policy. We introduce ThinkProprio, which converts proprioception into a sequence of text tokens in the VLM embedding space and fuses them with the task instruction at the input. This early fusion lets embodied state participate in subsequent visual reasoning and token selection, biasing computation toward action-critical evidence while suppressing redundant visual tokens. In a systematic ablation over proprioception encoding, state entry point, and action-head conditioning, we find that text tokenization is more effective than learned projectors, and that retaining roughly 15% of visual tokens can match the performance of using the full token set. Across CALVIN, LIBERO, and real-world manipulation, ThinkProprio matches or improves on strong baselines while reducing end-to-end inference latency by more than 50%.
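The early-fusion idea described above can be sketched as follows. This is a minimal illustrative example, not the paper's actual implementation: the function names, state format, and separator token are hypothetical, and a real system would feed the resulting string through the VLM's own text tokenizer.

```python
# Hypothetical sketch of text-tokenized proprioception with early fusion.
# All names and the "state: ... | instruction" format are illustrative
# assumptions, not ThinkProprio's actual interface.

def proprio_to_text(state, precision=2):
    """Render a proprioceptive state vector as plain text so it can be
    tokenized by the VLM's ordinary text tokenizer (no learned projector)."""
    return "state: " + " ".join(f"{v:.{precision}f}" for v in state)

def build_prompt(instruction, state):
    # Early fusion: the robot state joins the instruction at the model
    # input, before any visual reasoning or visual-token selection occurs,
    # so state can influence which visual tokens are attended downstream.
    return f"{proprio_to_text(state)} | {instruction}"

prompt = build_prompt("pick up the red block", [0.12, -0.53, 0.88])
# → "state: 0.12 -0.53 0.88 | pick up the red block"
```

In contrast, a late-conditioning baseline would concatenate a projected state embedding only at the action head, after instruction understanding and visual attention are already fixed.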