Lightweight autonomous unmanned aerial vehicles (UAVs) are emerging as a central component of a broad range of applications. However, autonomous navigation necessitates the implementation of perception algorithms, often deep neural networks (DNNs), that process sensor observations, such as those from cameras and LiDARs, to drive control logic. The complexity of such algorithms clashes with the severe constraints of these devices in terms of computing power, energy, memory, and execution time. In this paper, we propose NaviSplit, the first instance of a lightweight navigation framework embedding a distributed and dynamic multi-branched neural model. At its core is a DNN split at a compression point, resulting in two model parts: (1) the head model, executed at the vehicle, which partially processes and compacts the sensor perception; and (2) the tail model, executed at an interconnected compute-capable device, which processes the remainder of the compacted perception and infers navigation commands. Unlike prior work, the NaviSplit framework includes a neural gate that dynamically selects a specific head model to minimize channel usage while efficiently supporting the navigation network. In our implementation, the perception model extracts a 2D depth map from a monocular RGB image captured by the drone using the Microsoft AirSim simulator. Our results demonstrate that the NaviSplit depth model achieves an extraction accuracy of 72-81% while transmitting an extremely small amount of data (1.2-18 KB) to the edge server. When using the neural gate, as utilized by NaviSplit, we obtain navigation accuracy 0.3% higher than that of a larger static network while reducing the data rate by 95%. To the best of our knowledge, this is the first exemplar of a dynamic multi-branched model based on split DNNs for autonomous navigation.
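The gating idea described above can be illustrated with a minimal sketch. In NaviSplit the gate is itself a neural model; the snippet below only captures its selection objective, i.e., choosing the cheapest head branch (in bytes transmitted) that still supports navigation adequately. The branch names and utility scores are hypothetical placeholders; only the 1.2-18 KB payload range is taken from the reported results.

```python
# Hypothetical sketch of NaviSplit's gating objective: among several
# candidate head branches, pick the one that minimizes channel usage
# (bytes sent to the edge server) while still supporting navigation.
# Branch payload sizes mirror the 1.2-18 KB range reported above;
# the utility scores are illustrative, not measured values.

from dataclasses import dataclass

@dataclass
class HeadBranch:
    name: str
    payload_kb: float      # compressed feature size sent over the channel
    est_utility: float     # estimated navigation utility in [0, 1]

def select_branch(branches, min_utility):
    """Return the cheapest branch whose estimated utility clears the bar.

    Falls back to the highest-utility branch if none qualifies.
    """
    viable = [b for b in branches if b.est_utility >= min_utility]
    if not viable:
        return max(branches, key=lambda b: b.est_utility)
    return min(viable, key=lambda b: b.payload_kb)

branches = [
    HeadBranch("deep-compress", 1.2, 0.72),
    HeadBranch("mid-compress", 6.0, 0.78),
    HeadBranch("shallow-compress", 18.0, 0.81),
]

chosen = select_branch(branches, min_utility=0.75)
print(chosen.name, chosen.payload_kb)  # → mid-compress 6.0
```

A learned gate replaces the fixed `est_utility` scores with predictions conditioned on the current observation, so the trade-off between channel usage and navigation quality adapts at runtime.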