Pocket-sized autonomous nano-drones can revolutionize many robotic use cases, such as visual inspection in narrow, constrained spaces, and can ensure safer human-robot interaction thanks to their tiny form factor and weight of only tens of grams. This compelling vision is challenged by the high level of onboard intelligence required, which clashes with the limited computational and storage resources of the parallel ultra-low-power (PULP) MCU-class navigation and mission controllers that can be hosted aboard. This work builds on PULP-Dronet, a state-of-the-art convolutional neural network for autonomous navigation on nano-drones. We introduce Tiny-PULP-Dronet, a novel methodology that shrinks both the model size (50x fewer parameters) and the number of operations required per inference (27x fewer multiply-and-accumulate operations) by more than one order of magnitude while delivering flight performance similar to PULP-Dronet. This massive reduction paves the way towards affordable multi-tasking on nano-drones, a fundamental requirement for achieving high-level intelligence.