Achieving robust generalization across diverse data domains remains a significant challenge in computer vision. This challenge is especially acute in safety-critical applications, where deep-neural-network-based systems must perform reliably under environmental conditions not seen during training. Our study investigates whether the generalization capabilities of Vision Foundation Models (VFMs) and Unsupervised Domain Adaptation (UDA) methods for semantic segmentation are complementary. Results show that combining VFMs with UDA has two main benefits: (a) it allows for better UDA performance while maintaining the out-of-distribution performance of VFMs, and (b) it makes certain time-consuming UDA components redundant, thus enabling significant inference speedups. Specifically, with equivalent model sizes, the resulting VFM-UDA method achieves an 8.4$\times$ speedup over the prior non-VFM state of the art, while also improving performance by +1.2 mIoU in the UDA setting and by +6.1 mIoU in out-of-distribution generalization. Moreover, when we use a VFM with 3.6$\times$ more parameters, the VFM-UDA approach maintains a 3.3$\times$ speedup, while improving the UDA performance by +3.1 mIoU and the out-of-distribution performance by +10.3 mIoU. These results underscore the significant benefits of combining VFMs with UDA, setting new standards and baselines for Unsupervised Domain Adaptation in semantic segmentation.