The task of vision-based 3D occupancy prediction aims to reconstruct 3D geometry and estimate its semantic classes from 2D color images, where the 2D-to-3D view transformation is an indispensable step. Most previous methods conduct forward projection, such as BEVPooling and VoxelPooling, both of which map 2D image features into 3D grids. However, a grid that represents features within a certain height range usually introduces many confusing features that belong to other height ranges. To address this challenge, we present Deep Height Decoupling (DHD), a novel framework that incorporates an explicit height prior to filter out the confusing features. Specifically, DHD first predicts height maps via explicit supervision. Based on the height distribution statistics, DHD designs Mask Guided Height Sampling (MGHS) to adaptively decouple the height map into multiple binary masks. MGHS projects the 2D image features into multiple subspaces, where each grid contains only features within a reasonable height range. Finally, a Synergistic Feature Aggregation (SFA) module is deployed to enhance the feature representation through channel and spatial affinities, enabling further occupancy refinement. On the popular Occ3D-nuScenes benchmark, our method achieves state-of-the-art performance even with minimal input frames. Code is available at https://github.com/yanzq95/DHD.
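The mask-guided sampling idea above can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, the height-range boundaries, and the toy feature tensor are all hypothetical, chosen only to show how a predicted height map can be decoupled into binary masks that gate 2D features into per-range subspaces before projection:

```python
import numpy as np

def height_range_masks(height_map, ranges):
    """Decouple a per-pixel height map (H x W) into one binary mask
    per height interval [lo, hi). Hypothetical helper, not the paper's code."""
    return [((height_map >= lo) & (height_map < hi)).astype(np.float32)
            for lo, hi in ranges]

# Toy 2x2 height map (meters) and two assumed height ranges.
heights = np.array([[0.2, 1.5],
                    [3.0, 0.8]])
masks = height_range_masks(heights, [(0.0, 1.0), (1.0, 4.0)])

# Dummy per-pixel image features (H x W x C); each mask gates the
# features into one height subspace before any 2D-to-3D projection.
feats = np.ones((2, 2, 4))
subspaces = [feats * m[..., None] for m in masks]
```

Each subspace then carries only the features whose predicted height falls in its range, which is the filtering effect the abstract attributes to MGHS.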