In this paper, we present a Computer Vision (CV) based tracking and fusion algorithm dedicated to a 3D-printed gimbal system on drones operating in natural environments. The gimbal system stabilizes the camera orientation robustly in challenging outdoor scenes by using the skyline and ground plane as references. Our main contributions are the following: a) a lightweight ResNet-18 backbone network, trained from scratch and deployed on the Jetson Nano platform, segments each image into two binary classes (ground and sky); b) a geometric assumption derived from natural cues enables robust visual tracking with the skyline and ground plane as references; c) an adaptive particle-sampling scheme on a spherical surface fuses orientation estimates from multiple sensor sources flexibly. The full algorithm pipeline was tested on our customized gimbal module, which includes the Jetson Nano and other hardware components; the experiments were performed on top of a building in a real landscape.
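To make the fusion idea in contribution c) concrete, the following is a minimal sketch of particle-based orientation fusion on the unit sphere: particles are scattered around one sensor's direction estimate, weighted by their angular agreement with every measurement, and averaged. The function names (`sample_around`, `fuse`), the Gaussian angular-error model, and the fixed noise scales are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(v):
    # Project vectors onto the unit sphere.
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def sample_around(direction, sigma, n, rng):
    # Hypothetical sampler: perturb `direction` with 3D Gaussian noise
    # and re-project onto the sphere to get nearby unit vectors.
    pts = direction + rng.normal(scale=sigma, size=(n, 3))
    return normalize(pts)

def fuse(measurements, sigmas, n=500, rng=rng):
    # Spread particles around the first measurement, then weight each
    # particle by its angular distance to every sensor's estimate
    # (assumed Gaussian angular error with std `sigmas[i]`).
    particles = sample_around(measurements[0], sigmas[0], n, rng)
    logw = np.zeros(n)
    for m, s in zip(measurements, sigmas):
        ang = np.arccos(np.clip(particles @ m, -1.0, 1.0))
        logw += -0.5 * (ang / s) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Weighted mean direction as the fused orientation estimate.
    return normalize((w[:, None] * particles).sum(axis=0))

# Example: fuse a vision-based "up" direction (e.g. from the skyline)
# with a slightly disagreeing IMU estimate; the smaller sigma of the
# IMU gives it more weight in the fused result.
vision_up = normalize(np.array([0.05, 0.0, 1.0]))
imu_up = normalize(np.array([0.0, 0.02, 1.0]))
fused = fuse([vision_up, imu_up], sigmas=[0.1, 0.05])
```

In practice the adaptive part of the sampling (concentrating particles as the estimate converges) and the actual sensor noise models would replace the fixed `sigmas` used here.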