Learning structured task representations from human demonstrations is essential for understanding long-horizon manipulation behaviors, particularly in bimanual settings, where action ordering, object involvement, and interaction geometry can vary significantly. A key challenge lies in jointly capturing the discrete semantic structure of tasks and the temporal evolution of object-centric geometric relations in a form that supports reasoning about task progression. In this work, we introduce a semantic-geometric task graph representation that encodes object identities, inter-object relations, and their temporal geometric evolution from human demonstrations. Building on this formulation, we propose a learning framework that combines a Message Passing Neural Network (MPNN) encoder with a Transformer-based decoder, decoupling scene representation learning from action-conditioned reasoning about task progression. The encoder operates solely on temporal scene graphs to learn structured representations, while the decoder conditions on action context to predict future action sequences, associated objects, and object motions over extended time horizons. Through extensive evaluation on human demonstration datasets, we show that semantic-geometric task graph representations are particularly beneficial for tasks with high action and object variability, where simpler sequence-based models struggle to capture task progression. Finally, we demonstrate that task graph representations can be transferred to a physical bimanual robot and used for online action selection, highlighting their potential as reusable task abstractions for downstream decision-making in manipulation systems.
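To make the scene-graph encoding concrete, the following is a minimal sketch, not the paper's implementation: it models a scene graph with object-identity node features and geometric edge features, and performs one round of mean-aggregation message passing of the kind an MPNN encoder builds on. All names, feature layouts, and the update rule are illustrative assumptions.

```python
# Toy sketch (illustrative only): a scene graph with object nodes and
# geometric edge features, plus one round of mean-aggregation message
# passing, as in a simple MPNN encoder layer.
from dataclasses import dataclass


@dataclass
class SceneGraph:
    # node id -> feature vector (e.g., an object-identity embedding)
    nodes: dict
    # (src, dst) -> geometric relation features (e.g., relative position)
    edges: dict


def message_passing_step(g: SceneGraph) -> dict:
    """One MPNN round: each node aggregates messages over incoming edges.

    A message is the elementwise sum of the neighbor's node features and
    the edge's geometric features; aggregation is the mean over messages,
    followed by a residual-style combination with the node's own features.
    """
    updated = {}
    for nid, feat in g.nodes.items():
        msgs = [
            [nf + ef for nf, ef in zip(g.nodes[src], efeat)]
            for (src, dst), efeat in g.edges.items()
            if dst == nid
        ]
        if not msgs:
            # isolated node: features pass through unchanged
            updated[nid] = list(feat)
            continue
        agg = [sum(vals) / len(msgs) for vals in zip(*msgs)]
        updated[nid] = [f + a for f, a in zip(feat, agg)]
    return updated


g = SceneGraph(
    nodes={"cup": [1.0, 0.0], "table": [0.0, 1.0]},
    edges={("table", "cup"): [0.5, 0.5]},  # "cup" receives from "table"
)
print(message_passing_step(g))  # {'cup': [1.5, 1.5], 'table': [0.0, 1.0]}
```

In the full framework, such per-frame graph encodings over a demonstration would form the temporal input to the action-conditioned Transformer decoder; a real encoder would of course use learned message and update functions rather than this fixed rule.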