Self-supervised models have recently made notable advances, particularly in semantic occupancy prediction. These models rely on sophisticated loss-computation strategies to compensate for the absence of ground-truth labels. For instance, techniques such as novel view synthesis, cross-view rendering, and depth estimation have been explored to resolve semantic and depth ambiguity. However, such techniques typically incur high computational cost and memory usage during training, novel view synthesis in particular. To mitigate these issues, we propose 3D pseudo-ground-truth labels generated by the foundation models Grounded-SAM and Metric3Dv2, and harness temporal information to densify them. Our 3D pseudo-labels can be easily integrated into existing models, yielding substantial performance improvements: when integrated into OccNeRF, mIoU increases by 45\%, from 9.73 to 14.09. This contrasts with earlier advances in the field, which are often not readily transferable to other architectures. Additionally, we propose a streamlined model, EasyOcc, which reaches 13.86 mIoU by learning solely from our labels, avoiding the complex rendering strategies mentioned above. Furthermore, our method enables models to attain state-of-the-art performance when evaluated on the full scene without applying the camera mask: EasyOcc achieves 7.71 mIoU, outperforming the previous best model by 31\%. These findings highlight the critical role of foundation models, temporal context, and the choice of loss-computation space in self-supervised learning for comprehensive scene understanding.
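To make the label-generation step concrete, the following is a minimal sketch of how 2D foundation-model outputs can be lifted into 3D pseudo-labels and densified over time. It assumes Grounded-SAM supplies per-pixel semantic masks and Metric3Dv2 supplies metric depth for each frame; the function names, the last-write voxel assignment, and the grid parameters (a common nuScenes-style occupancy setup) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def lift_to_3d(depth, semantics, K, cam_to_world):
    """Back-project per-pixel metric depth (Metric3Dv2 output) into a
    world-frame point cloud, carrying one semantic label (Grounded-SAM
    output) per pixel."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Pixel -> camera frame via the inverse intrinsics, scaled by depth.
    x = (u - K[0, 2]) / K[0, 0] * depth
    y = (v - K[1, 2]) / K[1, 1] * depth
    pts_cam = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    # Camera -> world with a 4x4 extrinsic matrix.
    pts_h = np.concatenate([pts_cam, np.ones((len(pts_cam), 1))], axis=1)
    pts_world = (cam_to_world @ pts_h.T).T[:, :3]
    return pts_world, semantics.reshape(-1)

def voxelize(points, labels, voxel_size, grid_origin, grid_shape):
    """Write point labels into a dense voxel grid (0 = empty).
    Last write wins here; a per-voxel majority vote is more robust."""
    idx = np.floor((points - grid_origin) / voxel_size).astype(int)
    valid = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
    grid = np.zeros(grid_shape, dtype=np.int64)
    grid[tuple(idx[valid].T)] = labels[valid]
    return grid

# Temporal densification: accumulate clouds from several frames (the
# per-frame poses bring them into a shared world frame), then voxelize
# once. `frames` and `K` are hypothetical per-sequence inputs.
def build_pseudo_labels(frames, K):
    all_pts, all_lbl = [], []
    for depth, semantics, cam_to_world in frames:
        pts, lbl = lift_to_3d(depth, semantics, K, cam_to_world)
        all_pts.append(pts)
        all_lbl.append(lbl)
    return voxelize(np.concatenate(all_pts), np.concatenate(all_lbl),
                    voxel_size=0.4,
                    grid_origin=np.array([-40.0, -40.0, -1.0]),
                    grid_shape=(200, 200, 16))
```

Because the result is an ordinary dense label grid, it can supervise any occupancy head with a standard cross-entropy loss, which is what makes the labels easy to drop into existing models such as OccNeRF.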