Humans generalize efficiently from limited demonstrations, but robots still struggle to transfer learned knowledge to unseen tasks with longer horizons and greater complexity. We propose the first known method that enables robots to autonomously invent relational concepts directly from small sets of unannotated, unsegmented demonstrations. The invented symbolic concepts are grounded in logic-based world models, supporting efficient zero-shot generalization to significantly more complex tasks. Empirical results show that our approach matches the performance of hand-crafted models, scaling to longer execution horizons and handling up to 18 times more objects than seen in training, and thus provides the first autonomous framework for learning transferable symbolic abstractions from raw robot trajectories.
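To make the pipeline concrete, the following is a minimal, hypothetical Python sketch of the core idea: an invented relational concept acts as a learned binary classifier over object-pair features, and the symbolic atoms it produces abstract a continuous scene into a relational state usable by a logic-based world model. All names here (`Atom`, `On`, `on_classifier`) are illustrative assumptions, not the paper's actual API or learned concepts.

```python
# Hypothetical sketch: grounding invented relational concepts into symbolic state.
# An invented concept is modeled as a classifier over pairs of object poses; the
# resulting set of true atoms is the relational state a logic-based planner uses.
from dataclasses import dataclass
from itertools import permutations
from typing import Callable

@dataclass(frozen=True)
class Atom:
    predicate: str         # e.g. "On"
    args: tuple[str, ...]  # e.g. ("block2", "block1")

def ground_state(objects: dict[str, tuple[float, float, float]],
                 classifiers: dict[str, Callable[[tuple, tuple], bool]]) -> set[Atom]:
    """Abstract continuous object poses into the set of relational atoms that hold."""
    state = set()
    for name, clf in classifiers.items():
        for a, b in permutations(objects, 2):
            if clf(objects[a], objects[b]):
                state.add(Atom(name, (a, b)))
    return state

# Stand-in for a learned classifier: fires when object a rests roughly on top of b.
# In the paper's setting such concepts would be invented from demonstrations,
# not hand-written as here.
def on_classifier(pa, pb, xy_tol=0.03, z_gap=(0.02, 0.08)):
    dx, dy, dz = pa[0] - pb[0], pa[1] - pb[1], pa[2] - pb[2]
    return abs(dx) < xy_tol and abs(dy) < xy_tol and z_gap[0] < dz < z_gap[1]

objects = {"block1": (0.0, 0.0, 0.0), "block2": (0.01, 0.0, 0.05)}
state = ground_state(objects, {"On": on_classifier})
print(state)  # {Atom(predicate='On', args=('block2', 'block1'))}
```

Because the relational state is defined over object pairs rather than a fixed object set, the same grounding procedure applies unchanged to scenes with many more objects than were seen in training, which is what allows the kind of zero-shot scaling the abstract claims.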