Learning the intent of an agent, defined by its goals or motion style, is often extremely challenging from just a few examples. We refer to this problem as task concept learning and present our approach, Few-Shot Task Learning through Inverse Generative Modeling (FTL-IGM), which learns new task concepts by leveraging invertible neural generative models. The core idea is to pretrain a generative model on a set of basic concepts and their demonstrations. Then, given a few demonstrations of a new concept (such as a new goal or a new action), our method learns the underlying concept through backpropagation without updating the model weights, thanks to the invertibility of the generative model. We evaluate our method in five domains: object rearrangement, goal-oriented navigation, motion capture of human actions, autonomous driving, and real-world table-top manipulation. Our experimental results demonstrate that, via the pretrained generative model, we successfully learn novel concepts and generate agent plans or motion corresponding to these concepts (1) in unseen environments and (2) in composition with training concepts.
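The inversion idea described above, recovering a concept representation by backpropagating through a frozen pretrained generator, can be illustrated with a minimal sketch. This is not the paper's implementation: a fixed linear map stands in for the pretrained neural generator, the demonstrations are synthetic, and all names (`W`, `z`, `generate`) are illustrative. Only the concept vector `z` is updated; the model weights `W` stay frozen throughout.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained generative model": maps a concept vector z to a
# demonstration (here, an 8-dimensional trajectory). A fixed linear map
# stands in for the real neural generator; W is never updated.
W = rng.normal(size=(8, 3))

def generate(z):
    """Toy generator: trajectory = W z."""
    return W @ z

# A few demonstrations of a new concept, produced by a hidden z_true
# that the learner does not observe directly.
z_true = np.array([1.0, -2.0, 0.5])
demo = generate(z_true)

# Invert the model: gradient descent on the reconstruction error of the
# demonstration, backpropagating through the frozen generator. For this
# linear toy model the gradient of 0.5 * ||W z - demo||^2 w.r.t. z is
# W^T (W z - demo).
z = np.zeros(3)
lr = 0.05
for _ in range(500):
    residual = generate(z) - demo
    z -= lr * (W.T @ residual)   # update the concept only, not the model

# z now approximates the hidden concept z_true
print(np.round(z, 3))
```

In the paper's setting the generator is a pretrained (invertible) neural model and the loss is computed over a few demonstrations of the new concept, but the optimization structure is the same: gradients flow through frozen weights into the concept representation alone.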