Partial multi-task learning, where each training example is annotated for only one of the target tasks, is a promising idea in remote sensing: it allows combining datasets annotated for different tasks and predicting more tasks with fewer network parameters. The na\"ive approach to partial multi-task learning is sub-optimal because no example carries annotations for all tasks, which hinders learning joint representations. This paper proposes using knowledge distillation to replace the need for ground truths for the alternate task and thereby enhance the performance of such an approach. Experiments on the public ISPRS 2D Semantic Labeling Contest dataset show the effectiveness of the proposed idea for partial multi-task learning on semantic tasks, namely object detection and semantic segmentation in aerial images.
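To make the idea concrete, the following is a minimal NumPy sketch of one possible loss for partial multi-task learning with distillation: on a batch annotated only for segmentation, the segmentation head receives a supervised loss, while the detection head is supervised by a frozen teacher's soft predictions instead of the missing ground truths. All function names, the temperature `T`, and the weight `alpha` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax along the last axis."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    """Supervised loss for the task that does have ground truth."""
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def kl_div(p_teacher, p_student):
    """KL(teacher || student), averaged over the batch."""
    return np.mean(np.sum(
        p_teacher * (np.log(p_teacher + 1e-12) - np.log(p_student + 1e-12)),
        axis=-1))

def partial_mtl_loss(seg_logits, seg_labels,
                     det_logits, teacher_det_logits,
                     T=2.0, alpha=0.5):
    """Hypothetical combined loss for a batch with segmentation labels only:
    supervised segmentation loss + distillation loss on the detection head."""
    seg_loss = cross_entropy(seg_logits, seg_labels)
    distill = kl_div(softmax(teacher_det_logits, T),
                     softmax(det_logits, T)) * T * T  # usual T^2 rescaling
    return seg_loss + alpha * distill
```

When the student's detection logits already match the teacher's, the distillation term vanishes and the loss reduces to the supervised segmentation loss; a batch annotated only for detection would use the symmetric arrangement.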