Source-free domain adaptation has developed rapidly in recent years: a well-trained source model, rather than the source data, is adapted to the target domain, which helps address privacy concerns and protect intellectual property. However, many feature alignment techniques from prior domain adaptation methods are infeasible in this challenging setting. We therefore resort to probing inherent domain-invariant feature learning and propose a curriculum-style self-training approach for source-free domain adaptive semantic segmentation. In particular, we introduce a curriculum-style entropy minimization method to extract the implicit knowledge of the source model, fitting the trained source model to the target data using reliable information from easy-to-hard predictions. We then train the segmentation network with the proposed complementary curriculum-style self-training, which exploits both negative and positive pseudo-labels in a curriculum-learning manner. Although highly uncertain negative pseudo-labels cannot identify the correct class of a pixel, they reliably indicate which classes are absent. Moreover, we employ an information propagation scheme to further reduce the intra-domain discrepancy within the target domain, which can serve as a standard post-processing step for domain adaptation methods. Furthermore, we extend the proposed method to the more challenging black-box scenario, where only the source model's predictions are available. Extensive experiments show that our method achieves state-of-the-art performance on source-free semantic segmentation tasks on both synthetic-to-real and adverse-condition datasets. The code and corresponding trained models are released at \url{https://github.com/yxiwang/ATP}.
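The two ingredients named above, entropy minimization on target predictions and self-training with complementary (negative as well as positive) pseudo-labels, can be sketched as loss functions. This is a minimal illustrative sketch, not the paper's exact formulation: the function names and the confidence thresholds `pos_thresh`/`neg_thresh` are hypothetical, and the paper's curriculum (easy-to-hard) scheduling is omitted.

```python
import torch
import torch.nn.functional as F

def entropy_minimization_loss(logits):
    # Shannon entropy of the per-pixel softmax prediction; minimizing it
    # sharpens the model's predictions on unlabeled target images.
    probs = F.softmax(logits, dim=1)            # (B, C, H, W)
    log_probs = F.log_softmax(logits, dim=1)
    entropy = -(probs * log_probs).sum(dim=1)   # (B, H, W)
    return entropy.mean()

def complementary_pseudo_label_loss(logits, pos_thresh=0.9, neg_thresh=0.05):
    # Positive pseudo-labels: pixels whose top class is confident are treated
    # as labeled with that class. Negative pseudo-labels: classes with very
    # low probability are treated as absent, and the model is pushed away
    # from them via -log(1 - p_c). Both thresholds are illustrative.
    probs = F.softmax(logits, dim=1)            # (B, C, H, W)
    max_prob, pseudo = probs.max(dim=1)         # (B, H, W)

    # Positive term: cross-entropy restricted to confident pixels.
    pos_mask = max_prob > pos_thresh
    pos_loss = F.cross_entropy(logits, pseudo, reduction="none")
    pos_loss = (pos_loss * pos_mask).sum() / pos_mask.sum().clamp(min=1)

    # Negative term: penalize probability mass on classes deemed absent.
    neg_mask = probs < neg_thresh
    neg_loss = -(torch.log1p(-probs.clamp(max=1 - 1e-6)) * neg_mask).sum()
    neg_loss = neg_loss / neg_mask.sum().clamp(min=1)

    return pos_loss + neg_loss
```

In a self-training loop, the two losses would be combined with curriculum weights that gradually admit harder pixels as adaptation proceeds.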