Generating context-adaptive manipulation and grasping actions is a challenging problem in robotics. Classical planning and control algorithms tend to be inflexible with respect to parameterization by external variables such as object shape. In contrast, Learning from Demonstration (LfD) approaches, by their nature as function approximators, allow external variables to be introduced that modulate policies in response to the environment. In this paper, we exploit this property by introducing an LfD approach that acquires context-dependent grasping and manipulation strategies. We treat the problem as kernel-based function approximation, where the kernel inputs include generic context variables describing task-dependent parameters such as object shape. Building on existing work on policy fusion with uncertainty quantification, we propose a state-dependent approach that automatically returns to the demonstrations, avoiding unpredictable behavior while smoothly adapting to context changes. The approach is evaluated on the LASA handwriting dataset and on a real 7-DoF robot in two scenarios: adapting to slippage during grasping, and manipulating a deformable food item.
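The context-conditioned kernel approximation described above can be illustrated with a minimal sketch. This is an assumption-laden toy (Nadaraya-Watson-style kernel regression with an RBF kernel), not the paper's actual formulation; the function names, the state/context split, and the lengthscale are all illustrative.

```python
import numpy as np

# Hypothetical sketch: kernel regression over demonstrations whose
# inputs are the robot state AUGMENTED with generic context variables
# (e.g. an object-shape descriptor). Not the paper's exact method.

def rbf_kernel(x, X, lengthscale=1.0):
    # Squared-exponential similarity between query x and rows of X.
    d2 = np.sum((X - x) ** 2, axis=1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def predict_action(state, context, X_demo, U_demo, lengthscale=1.0):
    # Concatenate state and context so the kernel can modulate the
    # policy by task parameters, then take a kernel-weighted average
    # of the demonstrated actions.
    query = np.concatenate([state, context])
    w = rbf_kernel(query, X_demo, lengthscale)
    w_sum = w.sum()
    if w_sum < 1e-9:
        # Far from every demonstration: no reliable prediction.
        return np.zeros(U_demo.shape[1])
    return (w[:, None] * U_demo).sum(axis=0) / w_sum

# Toy demonstrations: 2-D state + 1-D context mapped to 2-D actions.
rng = np.random.default_rng(0)
X_demo = rng.normal(size=(50, 3))
U_demo = X_demo[:, :2] * 0.5  # actions loosely correlated with state

u = predict_action(np.array([0.1, -0.2]), np.array([0.3]), X_demo, U_demo)
print(u.shape)  # (2,)
```

Changing only the context input (e.g. a different shape descriptor) shifts the kernel weights toward demonstrations recorded in similar contexts, which is the adaptation mechanism the abstract refers to.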
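The "return to demonstrations" behavior mentioned above can also be sketched. Here the summed kernel weight serves as a crude confidence proxy: far from the demonstration data, confidence drops and the command blends toward an attractor pointing back at the nearest demonstrated state. This is one plausible realization of uncertainty-gated policy fusion, not the paper's specific fusion rule; all names and the confidence heuristic are assumptions.

```python
import numpy as np

def fused_velocity(x, X_demo, V_demo, lengthscale=0.5):
    # State-dependent fusion: kernel-weighted demonstrated velocity
    # blended with a return-to-demonstration term, gated by a
    # confidence proxy (assumed heuristic, not the paper's rule).
    d2 = np.sum((X_demo - x) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / lengthscale**2)
    conf = np.clip(w.sum(), 0.0, 1.0)  # crude confidence in [0, 1]
    if w.sum() > 1e-9:
        v_lfd = (w[:, None] * V_demo).sum(axis=0) / w.sum()
    else:
        v_lfd = np.zeros(V_demo.shape[1])
    # Attractor toward the closest demonstrated state: dominates when
    # confidence is low, vanishes when the state lies on the data.
    nearest = X_demo[np.argmin(d2)]
    v_return = nearest - x
    return conf * v_lfd + (1.0 - conf) * v_return
```

Near the demonstrations the learned velocity dominates; far away the command points back toward the data, which avoids the unpredictable extrapolation the abstract warns about.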