The problem of reliably detecting and geolocating objects of different classes in soft real-time is essential in many application areas, such as Search and Rescue missions performed using Unmanned Aerial Vehicles (UAVs). This research addresses the complementary problems of context-aware, system-level selection, allocation, and execution of vision-based detectors, together with the fusion of detection results from teams of UAVs, for the purpose of accurately and reliably geolocating objects of interest in a timely manner. In an offline step, an application-independent evaluation of vision-based detectors is first performed from a system perspective. Based on this evaluation, the most appropriate algorithms for online object detection are selected automatically for each platform before a mission, taking into account practical system considerations such as the available communication links, the video compression used, and the available computational resources. The detection results are fused using a method for building maps of salient locations that takes advantage of a novel sensor model for vision-based detections, covering both positive and negative observations. A number of simulated and real flight experiments are presented to validate the proposed method.
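The fusion of positive and negative vision-based observations into a map of salient locations can be illustrated with a standard log-odds Bayesian update per map cell. The sketch below is a minimal, hypothetical illustration of this general technique, not the paper's calibrated sensor model; the true-positive and false-positive rates are placeholder values.

```python
import math


def logodds(p):
    """Convert a probability to its log-odds representation."""
    return math.log(p / (1.0 - p))


class SaliencyCell:
    """Log-odds belief that an object of interest occupies one map cell.

    Hypothetical illustration: tpr/fpr are placeholder detector rates,
    not values from the paper's evaluated detectors.
    """

    def __init__(self, prior=0.5):
        self.l = logodds(prior)

    def update(self, detected, tpr=0.8, fpr=0.1):
        """Fuse one observation.

        detected=True  -> positive observation: add log(P(z=1|obj)/P(z=1|no obj))
        detected=False -> negative observation: add log(P(z=0|obj)/P(z=0|no obj))
        """
        if detected:
            self.l += math.log(tpr / fpr)
        else:
            self.l += math.log((1.0 - tpr) / (1.0 - fpr))

    def probability(self):
        """Recover the posterior probability from log-odds."""
        return 1.0 / (1.0 + math.exp(-self.l))


# Usage: repeated positive detections raise the belief, a negative
# observation (detector looked but saw nothing) lowers it.
cell = SaliencyCell()
cell.update(True)
cell.update(True)
print(f"after two positives: {cell.probability():.3f}")
cell.update(False)
print(f"after one negative:  {cell.probability():.3f}")
```

Accumulating evidence in log-odds keeps each update a simple addition and lets detections from multiple UAVs be fused into the same cell in any order.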