Achieving highly accurate and real-time 3D occupancy prediction from cameras is a critical requirement for the safe and practical deployment of autonomous vehicles. While the recent shift to sparse 3D representations solves the encoding bottleneck, it creates a new challenge for the decoder: how to efficiently aggregate information from a sparse, non-uniformly distributed set of voxel features without resorting to computationally prohibitive dense attention. In this paper, we propose a novel Prototype-based Sparse Transformer Decoder that replaces this costly interaction with an efficient, two-stage process of guided feature selection and focused aggregation. Our core idea is to make the decoder's attention prototype-guided. We achieve this through a sparse prototype selection mechanism, in which each query adaptively identifies a compact set of the most salient voxel features, termed prototypes, for focused feature aggregation. To ensure this dynamic selection is stable and effective, we introduce a complementary denoising paradigm that leverages ground-truth masks to provide explicit guidance, guaranteeing a consistent query-prototype association across decoder layers. Our model, dubbed SPOT-Occ, outperforms previous methods by a significant margin in speed while also improving accuracy. Source code is released at https://github.com/chensuzeyu/SpotOcc.
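The two-stage process described above (guided feature selection followed by focused aggregation) can be sketched as follows. This is a minimal, hypothetical illustration, not the released implementation: the function name, tensor shapes, and the use of a simple dot-product relevance score with per-query top-k selection are all assumptions made for clarity.

```python
import numpy as np

def prototype_attention(queries, voxel_feats, k=8):
    """Hypothetical sketch of prototype-guided sparse attention.

    queries:     (Q, D) decoder query embeddings
    voxel_feats: (N, D) sparse, non-uniformly distributed voxel features

    Stage 1 (guided selection): each query scores all voxel features
    and keeps only its top-k most salient ones as "prototypes".
    Stage 2 (focused aggregation): softmax attention is computed over
    just those k prototypes, instead of dense attention over all N.
    """
    d = queries.shape[1]
    scores = queries @ voxel_feats.T                           # (Q, N) relevance
    topk_idx = np.argpartition(-scores, k - 1, axis=1)[:, :k]  # (Q, k) prototype ids
    topk_scores = np.take_along_axis(scores, topk_idx, axis=1)
    protos = voxel_feats[topk_idx]                             # (Q, k, D) gathered

    # Numerically stable softmax over the k selected prototypes only.
    logits = (topk_scores - topk_scores.max(axis=1, keepdims=True)) / np.sqrt(d)
    w = np.exp(logits)
    attn = w / w.sum(axis=1, keepdims=True)                    # (Q, k) weights

    return (attn[..., None] * protos).sum(axis=1)              # (Q, D) aggregated
```

The cost per decoder layer scales with Q x k rather than Q x N, which is why restricting each query to a compact prototype set avoids the dense-attention bottleneck on large sparse voxel sets.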