This paper presents multi-vision-based localisation strategies for harvesting robots. Accurately identifying picking points is essential for robotic harvesting, because insecure grasping can cause economic loss through fruit damage and dropping. In this study, two multi-vision-based localisation methods, namely an analytical approach and model-based algorithms, were employed. The actual geometric centre points of fruits were collected using a motion capture system (mocap), and two different surface points, Cfix and Ceih, were extracted using two Red-Green-Blue-Depth (RGB-D) cameras. First, the picking points of the target fruit were detected using the analytical method. Second, various primary and ensemble learning methods were employed to predict the geometric centre of target fruits, taking the surface points as input. AdaBoost regression, the most successful model-based localisation algorithm, achieved 88.8% harvesting accuracy with a Mean Euclidean Distance (MED) of 4.40 mm, while the analytical approach reached 81.4% picking success with a MED of 14.25 mm; both outperformed the single-camera baseline, which had a picking success rate of 77.7% with a MED of 24.02 mm. To evaluate the effect of picking point accuracy on fruit collection, a series of robotic harvesting experiments was performed using a collaborative robot (cobot). The results show that multi-vision systems can improve picking point localisation, leading to higher picking success rates in robotic harvesting.
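The model-based localisation step described above can be sketched as a regression problem: two observed surface points (Cfix and Ceih) form the input, the mocap-measured geometric centre is the target, and accuracy is scored by Mean Euclidean Distance (MED). The following is a minimal illustrative sketch using scikit-learn's AdaBoost regressor; the synthetic data, noise levels, and hyperparameters are assumptions for demonstration only and do not reproduce the paper's dataset or results.

```python
# Illustrative sketch (not the authors' implementation): predict a fruit's
# geometric centre from two RGB-D surface observations with AdaBoost
# regression, then evaluate with Mean Euclidean Distance (MED).
import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)

# Synthetic ground-truth centres (stand-in for mocap measurements), in mm.
centres = rng.uniform(0.0, 500.0, size=(200, 3))

# Two noisy surface points per fruit: centre plus/minus an assumed radius
# offset, with 3 mm Gaussian sensor noise (all values hypothetical).
offset = np.array([30.0, 0.0, 0.0])
c_fix = centres + offset + rng.normal(0.0, 3.0, centres.shape)
c_eih = centres - offset + rng.normal(0.0, 3.0, centres.shape)
X = np.hstack([c_fix, c_eih])  # 6 input features per fruit

# Train/test split; AdaBoostRegressor is single-output, so wrap it in
# MultiOutputRegressor to predict the 3-D centre.
train, test = slice(0, 150), slice(150, 200)
model = MultiOutputRegressor(
    AdaBoostRegressor(n_estimators=50, random_state=0)
)
model.fit(X[train], centres[train])

# MED: mean Euclidean distance between predicted and true centres.
pred = model.predict(X[test])
med = float(np.mean(np.linalg.norm(pred - centres[test], axis=1)))
print(f"MED: {med:.2f} mm")
```

In this toy setup the midpoint of the two surface points already approximates the centre, so the regressor mainly has to learn that averaging relationship; the paper's real task additionally contends with occlusion, calibration error, and fruit shape variation.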