Affordances, which provide information about what actions can be taken in a given situation, can aid robotic manipulation. However, learning affordances typically requires large, expensive annotated datasets of interactions or demonstrations. In this work, we show that active learning can mitigate this problem and propose using uncertainty to drive an interactive affordance-discovery process. We show that our method enables the efficient discovery of visual affordances for several action primitives, such as grasping, stacking objects, and opening drawers, strongly improving data efficiency and allowing us to learn grasping affordances in a real-world setup with an xArm 6 robot arm in a small number of trials.
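The uncertainty-driven query strategy the abstract alludes to can be sketched as follows. This is a minimal illustration, not the paper's exact method: it assumes an ensemble of binary success predictors scores a set of candidate actions (e.g. grasp locations), and the robot next tries the candidate whose mean predicted success probability has the highest predictive entropy. All names (`predictive_entropy`, `select_query`) are hypothetical.

```python
# Hedged sketch of uncertainty-driven affordance discovery: an ensemble of
# success predictors scores candidate actions, and the robot queries (tries)
# the action whose predicted outcome is most uncertain.
import numpy as np


def predictive_entropy(p):
    """Binary entropy of a success probability (clipped for numerical stability)."""
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))


def select_query(ensemble_probs):
    """Pick the candidate whose mean predicted success is most uncertain.

    ensemble_probs: array of shape (n_models, n_candidates), each model's
    predicted grasp-success probability for every candidate action.
    """
    mean_p = ensemble_probs.mean(axis=0)  # average the ensemble's predictions
    return int(np.argmax(predictive_entropy(mean_p)))


# Toy example: 2 ensemble members, 4 candidate grasp locations.
# Candidate 1 has mean probability 0.5 (maximal uncertainty), so it is queried.
probs = np.array([[0.9, 0.5, 0.1, 0.8],
                  [0.9, 0.5, 0.1, 0.8]])
query_idx = select_query(probs)
```

In a full loop, the robot would execute the selected action, observe success or failure, add the labeled example to the dataset, and retrain the ensemble, so labeling effort concentrates on the most informative interactions.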