Vision-Language-Action (VLA) models excel at static manipulation but struggle in dynamic environments with moving targets. This performance gap stems primarily from the scarcity of dynamic manipulation datasets and the reliance of mainstream VLAs on single-frame observations, which restricts their spatiotemporal reasoning. To address this, we introduce DOMINO, a large-scale dataset and benchmark for generalizable dynamic manipulation, featuring 35 tasks of hierarchical complexity, over 110K expert trajectories, and a multi-dimensional evaluation suite. Through comprehensive experiments, we systematically evaluate existing VLAs on dynamic tasks, explore effective training strategies for instilling dynamic awareness, and validate the generalizability of dynamic data. Furthermore, we propose PUMA, a dynamics-aware VLA architecture that couples history-aware perception with short-horizon prediction: it integrates scene-centric historical optical flow with specialized world queries that implicitly forecast object-centric future states. Results demonstrate that PUMA achieves state-of-the-art performance, yielding a 6.3% absolute improvement in success rate over baselines. Moreover, we show that training on dynamic data fosters robust spatiotemporal representations that transfer to static tasks. All code and data are available at https://github.com/H-EmbodVis/DOMINO.
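The abstract describes PUMA's mechanism only at a high level. The PyTorch sketch below is one plausible reading of that design, not the released implementation: historical optical-flow frames are patchified into scene-centric tokens, learnable world queries are appended to the token sequence, and after joint attention the queries are decoded into object-centric future-state forecasts alongside the action output. All names here (`DynamicsAwareVLASketch`, `flow_embed`, `world_queries`, the head dimensions) are illustrative assumptions.

```python
# Minimal sketch of a dynamics-aware VLA head, assuming patchified optical-flow
# tokens and learnable "world queries" (hypothetical names and shapes).
import torch
import torch.nn as nn


class DynamicsAwareVLASketch(nn.Module):
    def __init__(self, dim=256, num_world_queries=8, action_dim=7, horizon=4):
        super().__init__()
        # Patchify each 2-channel optical-flow frame into scene-centric tokens.
        self.flow_embed = nn.Conv2d(2, dim, kernel_size=16, stride=16)
        # Learnable queries that implicitly carry forecasts of future object states.
        self.world_queries = nn.Parameter(torch.randn(num_world_queries, dim))
        encoder_layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(encoder_layer, num_layers=4)
        # Heads: an action from pooled tokens, future object states from the queries
        # (here, assumed 3-D object positions over a short horizon).
        self.action_head = nn.Linear(dim, action_dim)
        self.future_head = nn.Linear(dim, horizon * 3)

    def forward(self, vision_tokens, flow_history):
        # vision_tokens: (B, N, dim) from a vision-language backbone.
        # flow_history:  (B, T, 2, H, W) stacked historical optical-flow frames.
        B, T = flow_history.shape[:2]
        flow_tokens = self.flow_embed(flow_history.flatten(0, 1))         # (B*T, dim, h, w)
        flow_tokens = flow_tokens.flatten(2).transpose(1, 2)              # (B*T, h*w, dim)
        flow_tokens = flow_tokens.reshape(B, -1, flow_tokens.shape[-1])   # (B, T*h*w, dim)
        queries = self.world_queries.expand(B, -1, -1)                    # (B, Q, dim)
        tokens = torch.cat([vision_tokens, flow_tokens, queries], dim=1)
        tokens = self.backbone(tokens)
        action = self.action_head(tokens.mean(dim=1))                     # (B, action_dim)
        future = self.future_head(tokens[:, -queries.shape[1]:])          # (B, Q, horizon*3)
        return action, future


# Example forward pass with dummy shapes (224x224 flow frames -> 14x14 patches):
# model = DynamicsAwareVLASketch()
# act, fut = model(torch.randn(2, 64, 256), torch.randn(2, 4, 2, 224, 224))
```

Under these assumptions, the `future` output would be supervised with short-horizon object trajectories from the dataset so that the world queries learn an implicit forecast, while the action head is trained with the usual imitation loss.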