We introduce PartSTAD, a method designed for task adaptation of 2D-to-3D segmentation lifting. Recent studies have highlighted the advantages of leveraging 2D segmentation models to achieve high-quality 3D segmentation through few-shot adaptation. However, previous approaches have focused on adapting 2D segmentation models to the domain shift toward rendered images and synthetic text descriptions, rather than optimizing the model specifically for 3D segmentation. Our task adaptation method finetunes a 2D bounding box prediction model with an objective function for 3D segmentation. We introduce weights on the 2D bounding boxes for adaptive merging and learn these weights with a small additional neural network. Additionally, we incorporate SAM, a foreground segmentation model conditioned on a bounding box, to refine the boundaries of 2D segments and consequently those of the 3D segmentation. Our experiments on the PartNet-Mobility dataset show significant improvements with our task adaptation approach, achieving a 7.0%p increase in mIoU for semantic segmentation and a 5.2%p improvement in mAP@50 for instance segmentation compared to the state-of-the-art few-shot 3D segmentation model.
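The abstract does not spell out how the learned bounding-box weights are used to merge 2D predictions into a 3D segmentation. As a rough, non-authoritative illustration of the general idea, the sketch below uses hypothetical helpers (`mlp_weights`, `lift_labels`, and all shapes and inputs are assumptions, not the paper's actual pipeline): a small MLP maps per-box features to a scalar weight, and each 3D point is assigned the label with the highest weighted vote over the 2D boxes whose masks cover it across rendered views.

```python
import numpy as np

def mlp_weights(box_features, W1, b1, W2, b2):
    """Hypothetical small 2-layer MLP producing a scalar weight per box.

    box_features: (num_boxes, feat_dim) array of per-box features
    (e.g. detection confidence); output is in (0, 1) via a sigmoid.
    """
    h = np.maximum(box_features @ W1 + b1, 0.0)          # ReLU hidden layer
    return (1.0 / (1.0 + np.exp(-(h @ W2 + b2)))).ravel()

def lift_labels(point_hits, box_labels, box_weights, num_labels):
    """Weighted voting from 2D boxes to 3D point labels (illustrative only).

    point_hits[i] lists the box indices whose 2D mask covers 3D point i
    in some rendered view; each covering box casts a vote for its label,
    scaled by the box's learned weight.
    """
    votes = np.zeros((len(point_hits), num_labels))
    for i, hits in enumerate(point_hits):
        for b in hits:
            votes[i, box_labels[b]] += box_weights[b]
    # Each point takes the label with the highest accumulated weighted vote.
    return votes.argmax(axis=1)
```

Because the weights enter the merge as soft vote strengths, the whole lifting step stays differentiable in the weights, which is what makes finetuning against a 3D segmentation loss possible in the first place.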