Obstacle avoidance, as a fundamental capability of unmanned aerial vehicles (UAVs), has gained increasing attention with the growing focus on spatial intelligence. However, current obstacle-avoidance methods mainly rely on limited field-of-view sensors and are ill-suited to UAV scenarios that require full-spatial awareness, such as when the direction of motion differs from the UAV's heading. This limitation motivates us to explore omnidirectional obstacle avoidance for panoramic drones with full-view perception. We first study an underexplored problem setting in which a UAV must generate collision-free motion in environments where obstacles may appear from arbitrary directions, and construct a benchmark consisting of three representative flight tasks. Building on this setting, we propose Fly360, a two-stage perception-decision pipeline with a fixed random-yaw training strategy. In the perception stage, panoramic RGB observations are converted into depth maps, which serve as a robust intermediate representation. A lightweight policy network then maps these depth inputs to body-frame velocity commands. Extensive simulation and real-world experiments demonstrate that Fly360 achieves stable omnidirectional obstacle avoidance and outperforms forward-view baselines across all tasks. Our model is available at https://zxkai.github.io/fly360/
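To make the depth-to-velocity interface concrete, the sketch below illustrates the shape of the second (decision) stage under stated assumptions. Fly360's actual policy is a learned network; here a hand-written reactive rule stands in for it, mapping an omnidirectional (equirectangular) depth map to a body-frame velocity command by attracting toward a goal direction and repelling from any viewing direction whose depth falls below a safety threshold. All names (`avoid_velocity`, `pixel_directions`), the panorama convention, and the thresholds are illustrative assumptions, not the paper's method.

```python
import numpy as np

def pixel_directions(h, w):
    # Unit view directions for an equirectangular panorama (assumed layout):
    # rows span elevation [pi/2, -pi/2], columns span azimuth [-pi, pi),
    # expressed in the body frame (x forward, y left, z up).
    el = np.linspace(np.pi / 2, -np.pi / 2, h)          # top row looks up
    az = np.linspace(-np.pi, np.pi, w, endpoint=False)
    az_g, el_g = np.meshgrid(az, el)
    return np.stack([np.cos(el_g) * np.cos(az_g),       # x (forward)
                     np.cos(el_g) * np.sin(az_g),       # y (left)
                     np.sin(el_g)], axis=-1)            # z (up)

def avoid_velocity(depth, goal_dir, safe_dist=2.0, max_speed=1.0):
    # Toy reactive stand-in for the learned policy: move toward the goal,
    # push away from directions closer than safe_dist, cap the speed.
    dirs = pixel_directions(*depth.shape)
    close = depth < safe_dist
    repulsion = np.zeros(3)
    if close.any():
        w = (safe_dist - depth[close]) / safe_dist      # stronger when nearer
        repulsion = -(dirs[close] * w[:, None]).sum(axis=0) / close.sum()
    v = goal_dir / np.linalg.norm(goal_dir) + 4.0 * repulsion
    speed = np.linalg.norm(v)
    return v if speed <= max_speed else v / speed * max_speed
```

Because the depth map covers the full sphere, obstacles behind or beside the vehicle contribute repulsion just like frontal ones, which is the property that a forward-view sensor cannot provide when the movement direction differs from the heading.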