We propose a visual servoing method consisting of a detection network and a velocity trajectory planner. First, the detection network estimates the object's position and orientation in image space; these estimates are then normalized and filtered. The resulting direction and orientation serve as input to the trajectory planner, which respects the kinematic constraints of the robotic system in use. This enables safe and stable control, since the kinematic boundary values are taken into account during planning. Moreover, because direction estimation and velocity planning are separated, the learned part of the method does not directly influence the control value. This separation also allows the method to be transferred to different robotic systems without retraining, making it robot agnostic. We evaluate our method on different visual servoing tasks, with and without clutter, on two different robotic systems. Our method achieves mean absolute position errors of <0.5 mm and orientation errors of <1°. Additionally, we transfer the method to a new system that differs in both robot and camera, demonstrating the robot-agnostic capability of our method.