Vision-based navigation systems in arable fields are an underexplored area in agricultural robot navigation. Vision systems deployed in arable fields face challenges such as fluctuating weed density, varying illumination, crop growth stages, and crop row irregularities. Current solutions are often crop-specific and address only isolated conditions such as illumination or weed density. Moreover, the scarcity of comprehensive datasets hinders the development of generalised machine learning systems for navigating these fields. This paper proposes a suite of deep learning-based perception algorithms using affordable vision sensors for vision-based navigation in arable fields. Initially, a comprehensive dataset capturing the intricacies of multiple crop seasons, various crop types, and a range of field variations was compiled. Next, this study delves into the creation of robust infield perception algorithms capable of accurately detecting crop rows under diverse conditions such as different growth stages, weed densities, and varying illumination. Further, it investigates the integration of crop row following with vision-based crop row switching for efficient field-scale navigation. The proposed infield navigation system was tested in commercial arable fields, traversing a total distance of 4.5 km with average heading and cross-track errors of 1.24° and 3.32 cm, respectively.
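The heading and cross-track errors reported above can be defined relative to the detected crop-row centreline. As a minimal sketch (not the paper's implementation), the snippet below computes both metrics for a robot pose against a row modelled as a point plus a heading; the function name and pose representation are illustrative assumptions:

```python
import math

def track_errors(robot_x, robot_y, robot_heading_deg, row_point, row_heading_deg):
    """Illustrative computation of heading error (deg) and signed
    cross-track error (same units as positions) relative to a crop-row
    centreline defined by a point and a heading."""
    # Heading error: angular difference wrapped into (-180, 180].
    heading_err = (robot_heading_deg - row_heading_deg + 180.0) % 360.0 - 180.0
    # Cross-track error: signed perpendicular distance from the robot to
    # the line through row_point along row_heading_deg (positive = left of row).
    theta = math.radians(row_heading_deg)
    dx = robot_x - row_point[0]
    dy = robot_y - row_point[1]
    cross_track = dy * math.cos(theta) - dx * math.sin(theta)
    return heading_err, cross_track
```

For example, a robot offset 3.3 cm to the left of a row aligned with the x-axis and yawed 1.2° from it would report errors of roughly the magnitudes quoted in the abstract.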