We propose a visual servoing method consisting of a detection network and a velocity trajectory planner. First, the detection network estimates the object's position and orientation in image space. These estimates are then normalized and filtered. The resulting direction and orientation serve as input to the trajectory planner, which respects the kinematic constraints of the robotic system in use. This enables safe and stable control, since the kinematic boundary values are taken into account during planning. Moreover, because the direction estimation and the velocity planner are separated, the learned part of the method does not directly influence the control value. This also allows the method to be transferred to different robotic systems without retraining, making it robot agnostic. We evaluate our method on different visual servoing tasks, with and without clutter, on two different robotic systems. Our method achieves mean absolute position errors of <0.5 mm and orientation errors of <1{\deg}. Additionally, we transfer the method to a new system that differs in both robot and camera, underlining the robot-agnostic capability of our method.
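To make the separation between direction estimation and velocity planning concrete, the following is a minimal sketch (not the authors' implementation) of a velocity planner that turns a normalized direction from a detection stage into a velocity command while respecting kinematic bounds. All names and the limits v_max and a_max are illustrative assumptions.

```python
import numpy as np

def plan_velocity(direction, prev_velocity, dt,
                  v_max=0.05,   # assumed max Cartesian speed [m/s]
                  a_max=0.1):   # assumed max acceleration [m/s^2]
    """Return a velocity command aligned with `direction`, limited in
    magnitude by v_max and in rate of change by a_max (slew-rate limit).
    Illustrative sketch only; the paper's planner may differ."""
    direction = np.asarray(direction, dtype=float)
    norm = np.linalg.norm(direction)
    if norm < 1e-9:
        desired = np.zeros_like(direction)   # no valid direction: stop
    else:
        desired = (direction / norm) * v_max

    # Bound the change of velocity per control step by a_max * dt.
    delta = desired - prev_velocity
    max_step = a_max * dt
    step_norm = np.linalg.norm(delta)
    if step_norm > max_step:
        delta = delta / step_norm * max_step
    return prev_velocity + delta

# Usage example: ramp up toward a target direction over a few control steps.
v = np.zeros(3)
for _ in range(5):
    v = plan_velocity([1.0, 0.0, 0.0], v, dt=0.01)
    print(v)
```

Because the learned detector only supplies the direction, swapping the robot only requires adapting the assumed limits (v_max, a_max) to the new system, not retraining the network.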