Recent vision-language-action (VLA) models can generate plausible end-effector motions, yet they often fail in long-horizon, contact-rich tasks because the underlying hand-object interaction (HOI) structure is not explicitly represented. An embodiment-agnostic interaction representation that captures this structure would make manipulation behaviors easier to validate and transfer across robots. We propose FlowHOI, a two-stage flow-matching framework that generates semantically grounded, temporally coherent HOI sequences, comprising hand poses, object poses, and hand-object contact states, conditioned on an egocentric observation, a language instruction, and a 3D Gaussian splatting (3DGS) scene reconstruction. We decouple geometry-centric grasping from semantics-centric manipulation, conditioning the latter on compact 3D scene tokens and employing a motion-text alignment loss that grounds the generated interactions in both the physical scene layout and the language instruction. To address the scarcity of high-fidelity HOI supervision, we introduce a reconstruction pipeline that recovers aligned hand-object trajectories and meshes from large-scale egocentric videos, yielding an HOI prior for robust generation. Across the GRAB and HOT3D benchmarks, FlowHOI achieves the highest action recognition accuracy and a 1.7$\times$ higher physics simulation success rate than the strongest diffusion-based baseline, while delivering a 40$\times$ inference speedup. We further demonstrate real-robot execution on four dexterous manipulation tasks, illustrating the feasibility of retargeting generated HOI representations to physical robot platforms.
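To make the flow-matching component concrete, the following is a minimal, framework-agnostic sketch (not the paper's implementation) of the two pieces any flow-matching generator needs: constructing a training pair (interpolated sample, target velocity) and sampling by integrating the learned velocity field with a few Euler steps. The few-step ODE integration is what makes flow matching markedly faster at inference than many-step diffusion samplers. All function names are illustrative, and the oracle velocity in the usage example stands in for a trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

def fm_training_pair(x1, rng):
    """One flow-matching training example for a data sample x1.

    Draws noise x0 and a time t, forms the linear interpolant
    x_t = (1 - t) x0 + t x1, and returns the regression target
    v = x1 - x0 (the constant velocity along the straight path).
    A network v_theta(x_t, t) would be trained to match v in L2.
    """
    x0 = rng.standard_normal(x1.shape)   # noise endpoint of the path
    t = rng.uniform()                    # time uniformly in [0, 1]
    xt = (1.0 - t) * x0 + t * x1         # point on the straight-line path
    v_target = x1 - x0                   # velocity the model should predict
    return xt, t, v_target

def euler_sample(velocity_fn, dim, steps, rng):
    """Generate a sample by integrating dx/dt = v(x, t) from noise.

    A handful of Euler steps suffices because the learned paths are
    near-straight; this is the source of flow matching's inference
    speed advantage over iterative diffusion denoising.
    """
    x = rng.standard_normal(dim)         # start from Gaussian noise at t = 0
    dt = 1.0 / steps
    for k in range(steps):
        t = k * dt
        x = x + dt * velocity_fn(x, t)   # one explicit Euler step
    return x

# Usage with an oracle velocity field whose target distribution is a
# point mass at mu (so the ideal velocity at (x, t) is (mu - x)/(1 - t));
# a trained v_theta would replace this lambda.
mu = np.full(3, 2.0)
sample = euler_sample(lambda x, t: (mu - x) / (1.0 - t), dim=3, steps=8, rng=rng)
```

In FlowHOI, `x1` would be an HOI sequence (hand poses, object poses, contact states) and the velocity network would additionally be conditioned on the egocentric observation, the instruction, and the 3DGS scene tokens; those conditioning pathways are omitted here.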