Object detection and semantic segmentation are both scene understanding tasks, yet they differ in data structure and information level: object detection requires box coordinates for object instances, while semantic segmentation requires pixel-wise class labels. Exploiting one task's information to train the other is beneficial for multi-task partially supervised learning, where each training example is annotated for only a single task, as it opens the possibility of expanding training sets with datasets annotated for a different task. This paper studies various weak losses for partially annotated data in combination with existing supervised losses. We propose the Box-for-Mask and Mask-for-Box strategies, and their combination BoMBo, to distill the necessary information from one task's annotations to train the other. Ablation studies and experimental results on the VOC and COCO datasets show favorable results for the proposed approach. Source code and data splits can be found at https://github.com/lhoangan/multas.