Image-based shape retrieval (IBSR) aims to retrieve 3D models from a database given a query image, a classical task in computer vision, computer graphics, and robotics. Recent approaches typically bridge the domain gap between 2D images and 3D shapes using multi-view renderings together with task-specific metric learning to embed shapes and images into a common latent space. In contrast, we address IBSR through large-scale multi-modal pretraining and show that explicit view-based supervision is not required. Inspired by the pre-aligned image--point-cloud encoders of ULIP and OpenShape, which have been used for tasks such as 3D shape classification, we propose pre-aligned image and shape encoders for zero-shot and standard IBSR: images and point clouds are embedded into a shared representation space, and retrieval is performed via similarity search over compact single-embedding shape descriptors. This formulation avoids view synthesis entirely and naturally enables zero-shot and cross-domain retrieval without retraining on the target database. We evaluate pre-aligned encoders in both zero-shot and supervised IBSR settings and additionally introduce a multi-modal hard contrastive loss (HCL) to further improve retrieval performance. Our evaluation demonstrates state-of-the-art performance, outperforming related methods on $Acc_{Top1}$ and $Acc_{Top10}$ for shape retrieval across multiple datasets, with the best results obtained by OpenShape combined with Point-BERT. Furthermore, training with our proposed multi-modal HCL yields dataset-dependent gains on standard instance retrieval tasks over shape-centric data, underscoring the value of pretraining and hard contrastive learning for 3D shape retrieval. The code will be made available via the project website.
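The retrieval step described above reduces to a nearest-neighbor search in the shared embedding space: each database shape is stored as a single descriptor, a query image is encoded once, and ranking uses cosine similarity. The following minimal sketch illustrates this with synthetic stand-ins for the encoder outputs; the 512-dimensional size, the function names, and the random embeddings are assumptions for illustration, not the paper's actual encoders.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    # Unit-normalize so that dot products equal cosine similarity.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def retrieve_top_k(query_emb, shape_embs, k=10):
    # Rank database shape descriptors by cosine similarity to the query
    # image embedding; returns the indices of the k best matches.
    sims = l2_normalize(shape_embs) @ l2_normalize(query_emb)
    order = np.argsort(-sims)[:k]
    return order, sims[order]

# Synthetic stand-ins for pre-aligned encoder outputs (hypothetical 512-dim).
rng = np.random.default_rng(0)
shape_embs = rng.standard_normal((1000, 512))            # one descriptor per shape
query_emb = shape_embs[42] + 0.1 * rng.standard_normal(512)  # noisy "image" of shape 42

indices, scores = retrieve_top_k(query_emb, shape_embs, k=10)
```

Because each shape is a single compact vector, this search scales to large databases with off-the-shelf approximate nearest-neighbor indexes, which is what makes the zero-shot setting practical: no per-database retraining or view rendering is needed.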