Many practical applications require training semantic segmentation models on unlabelled datasets and executing them on low-resource hardware. Distillation from a trained source model addresses the first requirement, but does not account for the different distribution of the target data. Unsupervised domain adaptation (UDA) techniques claim to solve the domain shift, but in most cases assume access to the source data or a white-box source model, which in practical applications are often unavailable for commercial and/or safety reasons. In this paper, we investigate a more challenging setting in which a lightweight model has to be trained for semantic segmentation on an unlabelled target dataset, under the assumption that we have access only to the predictions of a black-box source model. Our method, named CoRTe, consists of (i) a pseudo-labelling function that extracts reliable knowledge from the black-box source model using its relative confidence, (ii) a pseudo-label refinement method that retains and enhances the novel information learned by the student model on the target data, and (iii) consistent training of the model using the extracted pseudo labels. We benchmark CoRTe on two synthetic-to-real settings, demonstrating remarkable results when using black-box models to transfer knowledge to lightweight models for a target data distribution.
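The abstract does not specify how relative confidence is computed; a minimal sketch of one plausible formulation is shown below, where a pixel's pseudo label is kept only if the ratio between its top-1 and top-2 class probabilities exceeds a threshold. The function name, the threshold `tau`, and the ratio-based criterion are all illustrative assumptions, not the paper's exact definition.

```python
import numpy as np

def relative_confidence_pseudo_labels(probs, tau=2.0, ignore_index=255):
    """Extract pseudo labels from black-box soft predictions.

    probs: (H, W, C) per-pixel class probabilities returned by the
    source model. Pixels whose top-1/top-2 probability ratio falls
    below `tau` (an assumed form of "relative confidence") are set
    to `ignore_index` and excluded from student training.
    """
    sorted_p = np.sort(probs, axis=-1)            # ascending per pixel
    top1, top2 = sorted_p[..., -1], sorted_p[..., -2]
    ratio = top1 / np.maximum(top2, 1e-8)         # relative confidence
    labels = probs.argmax(axis=-1)
    labels[ratio < tau] = ignore_index            # mask unreliable pixels
    return labels
```

For example, a pixel with probabilities (0.8, 0.1, 0.1) has ratio 8 and keeps its label, while (0.4, 0.35, 0.25) has ratio ≈1.14 and is ignored; a ratio criterion is scale-free, so it filters ambiguous pixels even when the black-box model is poorly calibrated.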