In robotics, Learning from Demonstration (LfD) aims to transfer skills to robots using multiple demonstrations of the same task. These demonstrations are recorded and processed to extract a consistent skill representation, a process that typically requires temporal alignment through techniques such as Dynamic Time Warping (DTW). In this paper, we introduce a novel algorithm, named Spatial Sampling (SS), specifically designed for robot trajectories, that enables time-independent alignment by providing an arc-length parametrization of the signals. This approach eliminates the need for temporal alignment, enhancing the accuracy and robustness of the skill representation. Specifically, we show that large time shifts in the demonstrated trajectories can introduce uncertainties in the synthesis of the final trajectory, and that alignment in the arc-length domain drastically reduces these uncertainties compared with various state-of-the-art time-based signal alignment algorithms. To support this evaluation, we built a custom, publicly available dataset of robot recordings to test on real-world trajectories.
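To illustrate the core idea of arc-length parametrization (not the authors' SS algorithm itself, whose details are given in the paper), the sketch below resamples a recorded trajectory at uniform spacing along its path. Two demonstrations of the same geometric path then align point-by-point regardless of how fast each was executed; all names and parameters here are illustrative.

```python
import numpy as np

def arc_length_resample(points, n_samples=100):
    """Resample a trajectory uniformly in arc length (illustrative sketch).

    points: (N, D) array of waypoints recorded at arbitrary times.
    Returns (n_samples, D) points spaced evenly along the path, so two
    demonstrations of the same path align regardless of their timing.
    """
    points = np.asarray(points, dtype=float)
    # Cumulative arc length along the polyline, starting at 0.
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate(([0.0], np.cumsum(seg)))
    # Uniform grid in arc length, then linear interpolation per dimension.
    s_uniform = np.linspace(0.0, s[-1], n_samples)
    return np.stack([np.interp(s_uniform, s, points[:, d])
                     for d in range(points.shape[1])], axis=1)

# Two demonstrations of the same quarter-circle path: one densely and
# evenly sampled, one sparsely and unevenly sampled (a "time shift").
t_slow = np.linspace(0.0, np.pi / 2, 50)
t_fast = np.sort(np.random.default_rng(0).uniform(0.0, np.pi / 2, 20))
t_fast = np.concatenate(([0.0], t_fast, [np.pi / 2]))
demo_a = np.stack([np.cos(t_slow), np.sin(t_slow)], axis=1)
demo_b = np.stack([np.cos(t_fast), np.sin(t_fast)], axis=1)

a = arc_length_resample(demo_a)
b = arc_length_resample(demo_b)
# After arc-length resampling, the two demonstrations nearly coincide
# point-by-point, with no temporal alignment step.
max_dev = np.max(np.linalg.norm(a - b, axis=1))
print(max_dev)
```

The key design point is that the independent variable becomes distance traveled rather than time, so variations in execution speed across demonstrations no longer affect correspondence between points.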