Occupancy prediction provides critical geometric and semantic understanding for robotics but faces an efficiency-accuracy trade-off. Current dense methods waste computation on empty voxels, while sparse query-based approaches lack robustness in diverse and complex indoor scenes. In this paper, we propose DiScene, a novel sparse query-based framework that leverages multi-level distillation to achieve efficient and robust occupancy prediction. In particular, our method incorporates two key innovations: (1) a Multi-level Consistent Knowledge Distillation strategy, which transfers hierarchical representations from large teacher models to lightweight students through coordinated alignment across four levels: encoder-level feature alignment, query-level feature matching, prior-level spatial guidance, and anchor-level high-confidence knowledge transfer; and (2) a Teacher-Guided Initialization policy, which employs optimized parameter warm-up to accelerate model convergence. Validated on the Occ-Scannet benchmark, DiScene achieves 23.2 FPS without depth priors while outperforming our baseline method, OPUS, by 36.1%, and even surpasses the depth-enhanced version, OPUS†. With depth integration, DiScene† attains new state-of-the-art performance, surpassing EmbodiedOcc by 3.7% with 1.62$\times$ faster inference speed. Furthermore, experiments on the Occ3D-nuScenes benchmark and in in-the-wild scenarios demonstrate the versatility of our approach across diverse environments. Code and models can be accessed at https://github.com/getterupper/DiScene.
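The four-level distillation objective can be sketched as a weighted sum of per-level alignment losses between teacher and student representations. The snippet below is a minimal illustration only, not the paper's implementation: the level names mirror the abstract, but the MSE alignment objective, the per-level weights, and all tensor shapes are assumptions.

```python
import numpy as np

def multi_level_distill_loss(student_feats, teacher_feats, weights):
    """Weighted sum of MSE alignment losses across distillation levels.

    student_feats / teacher_feats: dict mapping level name -> feature array
    weights: dict mapping level name -> scalar loss weight (assumed uniform here)
    """
    total = 0.0
    for level, w in weights.items():
        s, t = student_feats[level], teacher_feats[level]
        total += w * float(np.mean((s - t) ** 2))  # per-level alignment term
    return total

# Hypothetical features for the four levels named in the abstract.
rng = np.random.default_rng(0)
levels = ["encoder", "query", "prior", "anchor"]
teacher = {k: rng.standard_normal((4, 8)) for k in levels}
# Student starts as a noisy copy of the teacher (illustrative only).
student = {k: teacher[k] + 0.1 * rng.standard_normal((4, 8)) for k in levels}
weights = {k: 1.0 for k in levels}

loss = multi_level_distill_loss(student, teacher, weights)
```

In practice each level would use its own alignment objective (e.g. feature matching for queries, spatial guidance for priors), and the loss would be minimized jointly with the task loss during student training.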