Identifying an appropriate task space that simplifies control solutions is important for solving robotic manipulation problems. One approach to this problem is learning an appropriate low-dimensional action space. Linear and nonlinear action mapping methods have trade-offs between simplicity on the one hand and the ability to express motor commands outside of a single low-dimensional subspace on the other. We propose that learning local linear action representations that adapt based on the current configuration of the robot achieves both of these benefits. Our state-conditioned linear maps ensure that for any given state, the high-dimensional robotic actuations are linear in the low-dimensional action. As the robot state evolves, so do the action mappings, ensuring the ability to represent motions that are immediately necessary. These local linear representations guarantee desirable theoretical properties by design, and we validate these findings empirically through two user studies. Results suggest state-conditioned linear maps outperform conditional autoencoder and PCA baselines on a pick-and-place task and perform comparably to mode switching in a more complex pouring task.
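The core idea above can be sketched in a few lines: a (here randomly initialised, hypothetical) network maps the robot state to a matrix, and the high-dimensional actuation is that matrix applied to the low-dimensional action. All dimensions and names below are illustrative assumptions, not the paper's actual architecture; the point is only that, for a fixed state, the actuation is linear in the action.

```python
import numpy as np

def make_map_network(state_dim, latent_dim, act_dim, seed=0):
    """Hypothetical stand-in for a learned state-conditioned map:
    given a state s, returns the matrix A(s) of shape (act_dim, latent_dim)."""
    rng = np.random.default_rng(seed)
    hidden = 32
    W1 = rng.normal(scale=0.1, size=(hidden, state_dim))
    W2 = rng.normal(scale=0.1, size=(act_dim * latent_dim, hidden))

    def A(s):
        h = np.tanh(W1 @ s)                      # nonlinear in the state s ...
        return (W2 @ h).reshape(act_dim, latent_dim)

    return A

# Illustrative dims: 7-D arm state, 2-D user action, 7-D actuation command.
A = make_map_network(state_dim=7, latent_dim=2, act_dim=7)
s = np.ones(7)
z1, z2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# ... but linear in the action z: superposition holds at a fixed state.
u = A(s) @ (0.3 * z1 + 0.7 * z2)
assert np.allclose(u, 0.3 * (A(s) @ z1) + 0.7 * (A(s) @ z2))
```

As the state `s` changes, `A(s)` changes with it, so the subspace of reachable actuations adapts to the current configuration rather than being fixed as in PCA-style linear maps.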