Cross-modal contrastive distillation has recently been explored for learning effective 3D representations. However, existing methods focus primarily on modality-shared features and neglect modality-specific features during pre-training, which leads to suboptimal representations. In this paper, we theoretically analyze the limitations of current contrastive methods for 3D representation learning and propose a new framework, CMCR (Cross-Modal Comprehensive Representation Learning), to address these shortcomings. Our approach improves upon traditional methods by better integrating both modality-shared and modality-specific features. Specifically, we introduce masked image modeling and occupancy estimation tasks to guide the network toward learning more comprehensive modality-specific features. Furthermore, we propose a novel multi-modal unified codebook that learns an embedding space shared across different modalities. In addition, we introduce geometry-enhanced masked image modeling to further boost 3D representation learning. Extensive experiments demonstrate that our method mitigates the challenges faced by traditional approaches and consistently outperforms existing image-to-LiDAR contrastive distillation methods on downstream tasks. Code will be available at https://github.com/Eaphan/CMCR.
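For concreteness, below is a minimal sketch of the kind of image-to-LiDAR contrastive distillation objective the abstract refers to, assuming a standard InfoNCE loss over paired pixel and point features. The function name `infonce_distillation_loss`, the tensor shapes, and the temperature value are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of an image-to-LiDAR contrastive distillation loss
# (InfoNCE over paired pixel/point features); illustrative only.
import torch
import torch.nn.functional as F


def infonce_distillation_loss(point_feats: torch.Tensor,
                              pixel_feats: torch.Tensor,
                              temperature: float = 0.07) -> torch.Tensor:
    """Contrast each 3D point feature against its paired 2D pixel feature.

    point_feats: (N, D) features from the LiDAR (student) backbone.
    pixel_feats: (N, D) features from the image (teacher) backbone, where
                 row i is the pixel that point i projects onto.
    """
    point_feats = F.normalize(point_feats, dim=-1)
    pixel_feats = F.normalize(pixel_feats, dim=-1)

    # Similarity of every point to every pixel; the diagonal holds positives.
    logits = point_feats @ pixel_feats.t() / temperature
    targets = torch.arange(point_feats.size(0), device=point_feats.device)
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    # Toy usage: 1024 paired point/pixel features of dimension 64.
    pts = torch.randn(1024, 64)
    pix = torch.randn(1024, 64)
    print(infonce_distillation_loss(pts, pix).item())
```

Such an objective aligns only the features shared by both modalities, which is precisely the limitation the abstract argues leads to suboptimal representations and motivates the additional modality-specific pretext tasks.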