Good pre-trained visual representations could enable robots to learn visuomotor policies efficiently. However, existing representations take a one-size-fits-all-tasks approach that comes with two important drawbacks: (1) being completely task-agnostic, these representations cannot effectively ignore task-irrelevant information in the scene, and (2) they often lack the representational capacity to handle unconstrained, complex real-world scenes. Instead, we propose to train a large combinatorial family of representations organized by scene entities: objects and object parts. This hierarchical object decomposition for task-oriented representations (HODOR) permits selectively assembling different representations specific to each task while scaling in representational capacity with the complexity of the scene and the task. In our experiments, we find that HODOR outperforms prior pre-trained representations, both scene vector representations and object-centric representations, for sample-efficient imitation learning across 5 simulated and 5 real-world manipulation tasks. We further find that the invariances captured by HODOR are inherited by downstream policies, which can robustly generalize to out-of-distribution test conditions, permitting zero-shot skill chaining. Appendix, code, and videos: https://sites.google.com/view/hodor-corl24.